Channel: Oracle – Official Pythian® Blog

Log Buffer #347, A Carnival of the Vanities for DBAs


What do swaying palms, turquoise water, white sandy beaches and absolutely pristine fauna remind you of? Correct! It’s Log Buffer. This Log Buffer brings you beads of blog posts related to data dexterity crafted by leading bloggers across the planet.

Oracle:

When are Exadata’s storage indexes used?

Oracle 12c has increased the maximum length of character-based columns to 32K bytes.

Oracle has extended the maximum length of varchar2, nvarchar2 and raw columns to 32K, but this comes with some challenges when it comes to indexing such columns.

Martin has been applying PSU 12.1.0.1.1 in his lab environment.

It is easier to create one or two AWR reports quickly using OEM. But what if you have to create AWR reports for several snapshots?

SQL Server:

A demonstration of Power BI for Office 365 shows you how all the various tools and technologies work together.

Executing powershell script in a SQL Agent job – Host errors

Optimizing SQL Server Performance: Changing Your Settings

What Exactly Is This Sysadmin You Speak Of?

Automated Permissions Auditing With Powershell and T-SQL.

MySQL:

Here is a commentary on MySQL's slow query collection sources.

Integrating pt-online-schema-change with a Scripted Deployment

How to add VIPs to Percona XtraDB Cluster or MHA with Pacemaker.

The binary and source versions of MySQL Cluster 7.3.3 have now been made available.

Since MariaDB aims to be a compatible/drop-in replacement to MySQL, it’s crucial that in 10.0 it supports all the 5.6 options/system variables.


Missing UKOUG Tech13 — But Still in Manchester at OakTable World UK 2013


This is the first year since 2006 that I am not presenting at or attending the UKOUG Technology annual conference. Sad, but I had to withdraw because I didn't believe UKOUG had been making the right choices lately. The trigger was the introduction of a limit of six presentations per company at the conference. I believe this doesn't really serve the conference, the attendees, my company (which I'm so passionate about), or my colleagues whom I'm so proud to work with.

Pythian's vision has always been to grow as the place where the top experts in the database industry want to work. Naturally, many of the folks working at Pythian are also industry leaders and active community contributors. As Pythian has been steadily growing, more and more Pythianites are submitting abstracts to conferences such as UKOUG, and they get accepted, being good abstracts from known speakers. UKOUG Tech13 wasn't an exception — while the number of accepted Pythian abstracts was nowhere near dominating, it was still more than six.

It's even more disappointing this year because this rule wasn't communicated (or didn't exist) earlier in the abstract selection process, when abstract reviewers were selecting the sessions. It seems it was added as an afterthought. This is less my concern than that of the people who actually did the reviews and will now be overridden by a rule that has nothing to do with the merits of the abstracts and speakers they evaluated. Long story short, I got a request from UKOUG to moderate the sessions from Pythian to comply with this new limit of six sessions per company. While it sounds nice to give the company such a choice, it goes against our culture of conference participation — it's the individual's choice, not corporate moderation. From Pythian's perspective, there is no central moderation of abstract submissions and approvals for UKOUG — folks have their own individual budgets and conference allotments every year that they are free to control as long as it works with their teams and budgets. Thus, I couldn't possibly decide who should go and who should withdraw — it would break our culture.

This wouldn't even have mattered much in practice, because a few speakers would naturally have withdrawn their sessions and we would have ended up with fewer than six anyway, without any moderation. What's important is that (1) the approach itself harms Pythian and its employees, because people who join the company have fewer chances to present at UKOUG, which so many of us have held in high regard; and (2) the conference agenda doesn't get the best presentations independently submitted by speakers and then independently selected by abstract reviewers and the selection committee.

While I appreciate the prompt response from UKOUG to my protest against this limit, and their readiness to explain the reasoning behind the decision, there was no interest in changing that late-added rule — my arguments were not strong enough, it seems. I thought the best course of action for me personally would be to withdraw my sessions this year, which is what I did. Needless to say, it wasn't an easy call to make, because the UKOUG conference is the first conference I attended and the first conference I presented at. It's been a special kind of gathering for me, and I haven't missed a single one since 2006, no matter where I lived at the time of the conference. I wanted to share my decision on the blog but didn't want to make it look too anti-UKOUG and potentially reduce conference attendance — while I'm unhappy with some of the decisions, I respect many of the people involved in UKOUG. This is why I'm writing this at 3am on Monday, just before the 2nd day of the conference.

I must say that a large part of my exciting UKOUG conference experience is meeting the good friends I've made over the years, and making new ones. The good news is that this part will (at least partially) still happen this year. While I'm not presenting at or attending UKOUG Tech13, I'll be at OakTable World UK 2013 on Monday and Tuesday, which is right across the road from the UKOUG Tech13 conference and is open to everyone to come by. Come see presentations from awesome speakers and network with a group of people you like (well, supposedly like).

So, this is sort of my personal protest against this absurd rule. I hope UKOUG will change their mind next year. By the way, Pythian is not the only company in such a position — there are other companies (not many, but you know who you are!) that are successfully growing and attracting top talent, and are in the same boat as Pythian when it comes to this limit of six presentations per company. I don't want to speak on their behalf, but I know some folks had to make their choices too.

Now you know where you can find me and some of the folks you won't be seeing at UKOUG Tech13 this year – Premier Inn, 7-11 Lower Mosley Street, Manchester. Monday and Tuesday. In the evenings… Well, you know where you can find me too.

Online Storage Migration without ASM


I recently blogged about Playing with ASM Online Migration and am still soul searching after not having jumped on the ASM bandwagon.

How could online storage migration be performed before ASM came about? One option is to use the Logical Volume Manager (LVM), as I will demonstrate:

Display Volume Group

[root@lax ~]# vgdisplay --verbose vg02
Using volume group(s) on command line
Finding volume group "vg02"
--- Volume group ---
VG Name vg02
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 26
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 5.00 GiB
PE Size 4.00 MiB
Total PE 1279
Alloc PE / Size 1250 / 4.88 GiB
Free PE / Size 29 / 116.00 MiB
VG UUID 5xIHp0-ch9d-tdE9-FygL-nSvc-TMfz-SZ3BaI

--- Logical volume ---
LV Path /dev/vg02/lv_data01
LV Name lv_data01
VG Name vg02
LV UUID pZ1yuz-fzYm-eCmW-qzzs-rBCQ-Vohl-bc3Z3z
LV Write Access read/write
LV Creation host, time lax.localdomain, 2013-12-05 18:26:59 -0800
LV Status available
# open 1
LV Size 4.88 GiB
Current LE 1250
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2

--- Physical volumes ---
PV Name /dev/sdb
PV UUID 9lyp9r-w5cu-23u5-qIGj-AiLR-OuCk-Z5thvi
PV Status allocatable
Total PE / Free PE 1279 / 29

Display devices and Physical Volume

[root@lax ~]# ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sda3 /dev/sdb /dev/sdc
[root@lax ~]# pvdisplay /dev/sdb /dev/sdc
--- Physical volume ---
PV Name /dev/sdb
VG Name vg02
PV Size 5.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 1279
Free PE 29
Allocated PE 1250
PV UUID 9lyp9r-w5cu-23u5-qIGj-AiLR-OuCk-Z5thvi

"/dev/sdc" is a new physical volume of "5.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdc
VG Name
PV Size 5.00 GiB

Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID ycKP5Z-7BLc-209d-9nWk-M3fr-RilZ-SP0w0h

Create Physical Volume for disk /dev/sdc

[root@lax ~]# pvcreate /dev/sdc
Writing physical volume data to disk "/dev/sdc"
Physical volume "/dev/sdc" successfully created

Extend Volume Group to include disk /dev/sdc

[root@lax ~]# vgextend vg02 /dev/sdc
Volume group "vg02" successfully extended

While the physical volume is being moved from one device to another, the following PL/SQL block is run to generate concurrent write activity:

begin
  for i in 1..1000000 loop
    insert into t values (i);
  end loop;
end;
/

Move Physical Volume from /dev/sdb to /dev/sdc

[root@lax ~]# pvmove /dev/sdb /dev/sdc
/dev/sdb: Moved: 0.6%
/dev/sdb: Moved: 3.6%
/dev/sdb: Moved: 8.7%
/dev/sdb: Moved: 10.5%
/dev/sdb: Moved: 13.0%
/dev/sdb: Moved: 15.7%
/dev/sdb: Moved: 17.1%
/dev/sdb: Moved: 18.6%
/dev/sdb: Moved: 24.0%
/dev/sdb: Moved: 25.8%
/dev/sdb: Moved: 29.1%
/dev/sdb: Moved: 31.0%
/dev/sdb: Moved: 33.0%
/dev/sdb: Moved: 34.6%
/dev/sdb: Moved: 40.9%
/dev/sdb: Moved: 49.6%
/dev/sdb: Moved: 58.2%
/dev/sdb: Moved: 67.0%
/dev/sdb: Moved: 75.9%
/dev/sdb: Moved: 84.6%
/dev/sdb: Moved: 92.9%
/dev/sdb: Moved: 100.0%

Drop from Volume Group /dev/sdb

[root@lax ~]# vgreduce vg02 /dev/sdb
Removed "/dev/sdb" from volume group "vg02"

Verify Volume Group

[root@lax ~]# vgdisplay --verbose vg02
Using volume group(s) on command line
Finding volume group "vg02"
--- Volume group ---
VG Name vg02
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 31
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 5.00 GiB
PE Size 4.00 MiB
Total PE 1279
Alloc PE / Size 1250 / 4.88 GiB
Free PE / Size 29 / 116.00 MiB
VG UUID 5xIHp0-ch9d-tdE9-FygL-nSvc-TMfz-SZ3BaI

--- Logical volume ---
LV Path /dev/vg02/lv_data01
LV Name lv_data01
VG Name vg02
LV UUID pZ1yuz-fzYm-eCmW-qzzs-rBCQ-Vohl-bc3Z3z
LV Write Access read/write
LV Creation host, time lax.localdomain, 2013-12-05 18:26:59 -0800
LV Status available
# open 1
LV Size 4.88 GiB
Current LE 1250
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2

--- Physical volumes ---
PV Name /dev/sdc
PV UUID eXJJD0-ISW1-P7gg-aCfs-VRvj-5cj0-9xsyx0
PV Status allocatable
Total PE / Free PE 1279 / 29

[root@lax ~]#

Verify Transaction Count

LAX:(HR@db02)> select count(*) from t;

COUNT(*)
----------
1000000

LAX:(HR@db02)>
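For reference, the whole migration above boils down to four LVM commands. Here is a minimal shell sketch that wraps them in a function. The volume group and device names are the ones from this demo; the DRY_RUN hook is my addition for previewing the commands, not an LVM feature:

```shell
#!/bin/sh
# Online PV migration sketch. pvmove copies extents while the logical
# volume stays mounted, so the database keeps running throughout.
# Set DRY_RUN=echo to print the commands instead of executing them.
migrate_pv() {
  vg=$1 old=$2 new=$3
  run=${DRY_RUN:-}
  $run pvcreate "$new"         # initialize the new disk as a physical volume
  $run vgextend "$vg" "$new"   # add it to the volume group
  $run pvmove "$old" "$new"    # move all allocated extents online
  $run vgreduce "$vg" "$old"   # drop the emptied disk from the group
}

DRY_RUN=echo migrate_pv vg02 /dev/sdb /dev/sdc
```

Run it as root without DRY_RUN to perform the actual migration; an interrupted pvmove can be resumed by running pvmove again with no arguments.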

How to recover a subset of an Oracle database


Today’s blog post will discuss how to recover a subset of an Oracle database. Many of you would have come across different recovery scenarios, but I’ll be talking about a very different one that happened to me. The following are the details after receiving a call from a client, and checking the environment myself.

i) This is a production data warehouse database, sized around 5TB, running Oracle 10gR2 on the AIX platform.

ii) Weekly level 0 and frequent archive log compressed RMAN tape backups (NetBackup) are configured for this database.

iii) The client DBA confirmed that the level 0 backup had executed successfully, and then started his scheduled purge of older partitions. He accidentally deleted the partitions belonging to year 2011 instead of the requested year 2010 partitions for a table (due to a typo in Notepad while copy/pasting earlier commands).

iv) He found the issue by verifying the log file after executing the SQL script. He decided not to delete the associated datafiles belonging to the year 2011 tablespace at the DB or OS level.

v) Range partitioning is used, based on monthly data, with dedicated tablespaces for each year's partition data. There are more than 200 tablespaces, spread across multiple partitioned tables.

vi) The recycle bin (flashback drop) feature was not enabled, and no flashback features were configured. This database also doesn't have any standby databases configured.

vii) A logical backup of this database has never been taken, but block change tracking was enabled.

So, all I had was the recently completed level 0 tape backup. These older partitions are needed only during month-end reporting. The traditional method of restoring the entire database would have been time-consuming and would have required 5TB of additional storage.

Thanks to an RMAN feature (see MOS Note 223543.1), we can restore just a subset of the database. I therefore suggested to the client that I create a test database with only the required tablespaces. From the existing datafiles (tablespaces) at the OS level, I found the approximate size of the deleted partitions' tablespace to be around 300G. Along with the required SYSTEM/SYSAUX/UNDO tablespaces, the disk space required for this test database was around 420G.

The client's SA team created a new test server identical to the existing server. Most of the required additional mount points were created as soft links on the existing disk space of 600G (we needed space for archive logs too). After cloning the existing Oracle home to the new server, I used the following steps to complete this recovery activity.

1. Logged into the DW production database (db name: test, as usual) and identified the required tablespaces: SYSTEM, SYSAUX, UNDOTBS and TEST_DETAIL_2011_TS. Also identified the associated datafile numbers, e.g. file_ids 1, 2, 3, 149, 163, 164, 106, 107, 108, 109, 110, 181, 189 and 192.

2. Connected to the RMAN recovery catalog schema (rman_cat) from this database and identified the TAG value used for all level 0 backup files: TEST_FULLDB_THU151013. Also identified the required media for these datafiles and controlfiles: HT0008, HT0014 and HT0015. Asked the storage team to keep these media in the tape drive until this activity completed.

3. Logged into the test database server and started the instance in nomount state with the parameter values large_pool_size=500M and job_queue_processes=0.

4. Connected to RMAN utility with recovery catalog schema to restore the control files first.

RMAN> run
{
allocate channel t1 type 'sbt_tape';
send 'NB_ORA_client=test';
restore controlfile from tag 'TEST_FULLDB_THU151013';
release channel t1;
}

5. Mounted the database using sqlplus utility and disabled the block change tracking feature.

SQL> alter database mount;
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

6. Connected to the RMAN utility again with the recovery catalog schema to restore the required tablespaces.

RMAN> run
{
allocate channel t1 type 'sbt_tape';
allocate channel t2 type 'sbt_tape';
allocate channel t3 type 'sbt_tape';
send 'NB_ORA_client=test';
restore tablespace system,undotbs1,sysaux,TEST_DETAIL_2011_TS from tag 'TEST_FULLDB_THU151013';
sql "alter database datafile 1,2,3,149,163,164,106,107,108,109,110,181,189,192 online";
release channel t1;
release channel t2;
release channel t3;
}

7. Gathered the last archive log sequence backed up in this level 0 backup from the RMAN utility, e.g. 272150.

8. Created an RMAN command file named restore_db.rman, as the list of tablespaces to be skipped was huge. Used the 'set until sequence' clause to restore the required archive logs and recover the database. Here is the syntax used:

restore_db.rman
run
{
allocate channel t1 type 'sbt_tape';
allocate channel t2 type 'sbt_tape';
allocate channel t3 type 'sbt_tape';
send 'NB_ORA_client=test';
set until sequence 272151;   # (max sequence + 1)
recover database skip forever tablespace
ABC_IND_TS,ABC_DATA_TS,DEF_INDX_TS,DEF_REF_TS_01,DEF_REF_TS_01,ACCT_TAB_TS_01,ACCT_TAB_TS_02,
…………………………………………………………………………   # list of tablespaces other than the 4 required
TPG_DATA_TS_01,TPG_IND_TS_01,USERS,XDB;
release channel t1;
release channel t2;
release channel t3;
}

9. Executed this command file using RMAN utility with recovery catalog schema.
RMAN> @restore_db.rman

10. Used the sqlplus utility and opened the database with the resetlogs option, then converted this test database to run in noarchivelog mode.
SQL> recover database using backup controlfile until cancel;
cancel
SQL> alter database open resetlogs;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;

11. Confirmed that the deleted partitions now exist, with data, on the test database.
SQL> select count(1) from <table> partition(<part name>);

12. Handed over this test database and client DBA exported the partitions for year 2011 and imported into the production DW database.

So here we created a test database sized 420G instead of restoring the whole 5TB database, which definitely saved time and space. Though there are much easier options available in most production database environments (such as restoring from the recycle bin or flashing back a standby database to a point in time), this method was really helpful when I couldn't use those options.
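As an aside, the fiddliest part of step 8 is assembling the long "skip forever tablespace" list by hand. A small shell helper can generate it from a list of tablespace names (one per line, e.g. spooled from DBA_TABLESPACES). This is just an illustrative sketch — the four retained tablespace names are taken from this scenario, but the file plumbing is my assumption, not the method actually used:

```shell
#!/bin/sh
# Tablespaces that were restored and must NOT be skipped (from this scenario).
KEEP='SYSTEM|SYSAUX|UNDOTBS1|TEST_DETAIL_2011_TS'

# Read tablespace names (one per line) from the file given as $1 and
# emit a comma-separated list of everything except the KEEP set.
skip_list() {
  grep -Ev "^(${KEEP})$" "$1" | paste -sd, -
}

# Demo with a tiny sample list:
printf '%s\n' SYSTEM SYSAUX UNDOTBS1 TEST_DETAIL_2011_TS USERS XDB ABC_IND_TS > /tmp/ts_list.txt
skip_list /tmp/ts_list.txt   # prints: USERS,XDB,ABC_IND_TS
```

The output can then be pasted into the "recover database skip forever tablespace" clause of the command file.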

Best Certificate Authority for Jar file signing in Oracle E-Business Suite


Most of you are already aware of the recent announcement on Steven Chan's blog about new JRE requirements that require EBS JAR files to be signed by a code signing certificate. This requirement came in as Oracle is tightening up security around Java, one of the pieces of software most widely exploited by viruses and malware bots.

First, note that code signing certificates are different from the SSL certificates used for web URLs. Code signing certificates are used to sign files like Java JAR files, Windows kernel drivers, Windows program installation EXEs and ActiveX files. SSL certificates verify a web host and establish a secure connection to it, whereas code signing certs help users verify the publisher of a piece of software. One might wonder why Oracle doesn't ship signed JAR files by default. Unfortunately, Oracle cannot do that, as any Java-code-related patch would overwrite them and require a new set of signed JAR files.

Let's come back to the topic of this blog — what is the best certificate authority from which to buy a code signing certificate? The technology behind a $500 Verisign cert and a $70 Comodo certificate is the same. The $500 certificate doesn't do any extra magic — it might offer you some liability assurance, but the technology is the same.

I looked around and found that certs from StartSSL.com are the cheapest, costing around $59. Unfortunately, we cannot use them for JAR signing, as their root certificate is not yet included in the cacerts file shipped with the JRE. StartSSL certs are included in Windows 7, but not yet in Java. To use StartSSL certs with Java, we would first have to manually import them into the Java cacerts file, a manual process you are better off avoiding. You can list all the certificate authorities included in Java with the command below.

$ pwd

/home/oracle/jre1.7.0_45/bin
$ ./keytool -list -keystore ../lib/security/cacerts -v |grep Issuer:
Enter keystore password: changeit

I went on with my search to find the least expensive option that is already included in the Java cacerts file. COMODO Code Signing certificates seem to be the cheapest available; they can be picked up from this reseller store for about $80 a year. Going with a root certificate that is already included in the Java cacerts file avoids the need to manually import root certificates into Java on the server as well as into the JRE on all client machines.
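Once the certificate is in a Java keystore, the actual signing is done with the jarsigner tool that ships with the JDK. The sketch below only previews the command (note the echo); the keystore path, alias, and JAR name are placeholders rather than values from a real EBS environment, and the Comodo timestamp URL is the commonly published one:

```shell
#!/bin/sh
# All values below are placeholders -- substitute your own.
KEYSTORE=/home/oracle/codesign.jks   # keystore holding the code signing cert
STORE_ALIAS=my_codesign_cert         # alias of the cert within the keystore
JAR=/tmp/example.jar                 # JAR file to be signed

# Timestamping (-tsa) keeps signatures verifiable after the cert expires.
# Echoed here for preview; remove 'echo' to actually sign.
echo jarsigner -keystore "$KEYSTORE" \
  -tsa http://timestamp.comodoca.com \
  "$JAR" "$STORE_ALIAS"
```

jarsigner will prompt for the keystore password; verify the result afterwards with jarsigner -verify.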

So COMODO seems to be the winner here! For about $400 per 5 years, you get a certificate that you can use in all your prod and dev/test environments. I am also working on steps to set up an internal Certificate Authority that you can use to sign the JAR files for free, which is useful for demo/lab environments where the user population is much smaller. I am currently working on resolving the error below:

com.sun.deploy.security.RevocationChecker$StatusUnknownException: Certificate does not specify OCSP responder

See you in my next blog post! Happy Holidays!

Build An EBS 12.2.2 Sandbox Fast(ish) In Virtualbox from OVM templates


Last year, I wrote a blog post about installing Oracle E-Business Suite 12.1.3 in Virtualbox using templates designed for Oracle VM Server. It was a surprisingly popular post, and I enjoyed engaging with a lot of enthusiastic readers in the comments. With minimal OS configuration, no installer to run, and automated scripts provided by Oracle to create the instance, the appeal of having an “out-of-the-box” EBS Vision instance was huge, especially for non-Apps DBAs. The ability to deploy these templates in Virtualbox increases their accessibility for EBS technology enthusiasts who lack access to OVM server.

Well, the EBS 12.2 OVM templates are out, so it's time to get busy with the new stuff! With a few small tweaks, illustrated in this blog post, you can have a working E-Business Suite 12.2.2 sandbox running in VirtualBox without building your own servers from the ground up, running an installer, or applying any patches.

What’s different this time?

If you’re familiar with the 12.1.3 version of these instructions, you’ll notice a few differences in this post:

  1. No need to download a new kernel to replace the Xen kernel used by OVM. The kernel you need is already included; you just need to tell the server how to load it. This simplifies our lives quite a bit.
  2. We’re building a single-node Vision instance this time, instead of multi-node. This was a pretty common request from readers of the 12.1.3 post, and I wanted to reduce the amount of VM resources I was using on my personal lab machine.
  3. The resource footprint of the VM has changed significantly, particularly the memory requirements. The introduction of WebLogic Server (WLS) to the E-Business Suite techstack, along with the duplicated apps tier filesystem required to support online patching, has bulked up an already-hefty Vision instance.
  4. You’ll notice the “One hour of work!” tagline is missing from this post. It’s probably still pretty close, since the long-running steps are hands-off, but I didn’t use a stopwatch this time.

Finally, of course, you’ll find that in general, the EBS 12.2 tech stack is a much different animal than 12.1. :)

Caveats

You really should not follow this guide if…
… you are an Apps DBA. Release 12.2 is very new. If you take this “shortcut,” you are depriving yourself of a lot of learning opportunities: new install processes, patching practice, etc.
… you have an underpowered test system (< 2CPU, < 8GB RAM). The specs I’m using for this blog post are the bare minimum. You might be tempted to cut corners, but you’ll be setting yourself up for pain later on.
… you don’t know your license status. The OVM templates (and EBS software in general) are not “free to install, free to learn” content like you’ll find on OTN. Tread carefully, and know what your support contract permits you to do!
… you need support from anyone (not even me; I can only point the way). OVM templates were not designed for use in VirtualBox, and furthermore, these templates were designed for a multi-node configuration (1 db tier, 1 apps tier). This single-node Vision instance in VirtualBox is a double-Frankenstein job. Set your expectations accordingly.
… you have an OVM server at your disposal. Seriously, why jump through these hoops if you don’t have to?

Okay, let’s recap! No Apps DBAs, no underpowered test systems, no overpowered test systems, no support, no guarantees that it’ll even work. Now that I’ve eliminated 90% of my potential audience, let’s plow forward, shall we?

Ingredients

You will need:

  • Oracle VirtualBox
  • Lots of disk space. Final footprint will be about 325GB, but the intermediary file conversion steps require a lot of space. You should be fine with a 1 TB drive.
  • 8GB of memory. Minimum. WLS is a beast, baby! Rawr!
  • The previous requirement means that your host system needs to be capable of addressing that much memory, so a 32-bit host OS will not work here.
  • 2 CPU cores, minimum. This will be slow enough as it is, why add the additional delay of db and app tier fighting each other for a single CPU core?
  • OEL 5.x install media to act as a rescue boot disk. We don’t need to download a new kernel, but we still need to boot in rescue mode to tinker a little bit.
  • An understanding of some basic Linux systems administration tasks.
  • Familiarity with configuring storage and network options in Virtualbox.
  • Not strictly required, but a review of Note 1590941.1 would be a good idea. Most of the content is obviously set in an Oracle VM Server context, but there’s some useful information there about extracting the templates and using the configuration scripts.
  • Patience! Long downloads, file conversions…you’re looking at hours of waiting for things to complete.

Configuration steps

When following these instructions, note that some of the commands and output may appear to be truncated. Even if you don’t see a scrollbar, scroll right to get the rest of the content. Once I figure out a less awkward presentation method, I’ll update.

  1. Download the templates
    Connect to Oracle’s Software Delivery Cloud and download the files listed under “Oracle VM Templates for Oracle E-Business Suite Release 12.2.2 Media Pack for x86 (64 bit)”. For this exercise, you’ll need to download “Oracle E-Business Suite Release 12.2.2 Application Tier Install X86 (64 bit), Parts 1-3 (Part numbers V41235-01 – V41237-01)” and “Oracle E-Business Suite Release 12.2.2 Vision Demo Database Tier Install X86 (64 bit), Parts 1-4 (Part numbers V41171-01 – V41173-01).” I also recommend clicking the “View Digest” button near the top of the download page, and running md5sum on each of the downloaded zip files to make sure the checksums match that list.

    If you want to confirm that you’re in the right place, Steven Chan’s post announcing the templates has a screenshot of the Edelivery page.

  2. Extract the templates
    Note: If you’re using Windows and don’t have a Unix-like shell environment like Cygwin available, you may have to translate some of these steps to their Windows-y equivalents. Since we’re just concatenating and uncompressing files, I will assume you can do that. :-)
    • Unzip all the files you just downloaded (unzip 'V41*.zip'). This will produce two sets of files: OVM_OL5U9_X86_64_EBIZ12.2_VIS_DB_PVM.tgz.* (database tier template) and OVM_OL5U9_X86_64_EBIZ12.2_APPS_PVM.tgz.* (apps tier template).
    • Concatenate and uncompress the two sets of files:
       zathras:EBS122OVMTempl jpiwowar$ time cat OVM_OL5U9_X86_64_EBIZ12.2_APPS_PVM.tgz* | tar xvz -C /Volumes/Epsilon3/EBS\ Software/EBS122OVMTempl
      x OVM_OL5U9_X86_64_EBIZ12.2_APPS_PVM/
      x OVM_OL5U9_X86_64_EBIZ12.2_APPS_PVM/vm.cfg
      x OVM_OL5U9_X86_64_EBIZ12.2_APPS_PVM/EBS.img 
      x OVM_OL5U9_X86_64_EBIZ12.2_APPS_PVM/System.img
      
      real	150m42.754s
      user	12m38.850s
      sys	6m37.271s
      
      zathras:EBS122OVMTempl jpiwowar$ time cat OVM_OL5U9_X86_64_EBIZ12.2_VIS_DB_PVM.tgz.* | tar xvz -C '/Volumes/Epsilon3/EBS Software/EBS122OVMTempl'
      x OVM_OL5U9_X86_64_EBIZ12.2_VIS_DB_PVM/
      x OVM_OL5U9_X86_64_EBIZ12.2_VIS_DB_PVM/vm.cfg
      x OVM_OL5U9_X86_64_EBIZ12.2_VIS_DB_PVM/EBS.img
      x OVM_OL5U9_X86_64_EBIZ12.2_VIS_DB_PVM/System.img
      
      real	154m40.689s
      user	14m33.270s
      sys	6m42.356s
  3. Convert the disk image files to VDI format
    Use the vboxmanage command-line utility to convert the database System.img file and the EBS.img files from both tiers to VDI format. You’ll note that the final size of the apps tier VDI file is much smaller than its source. This is expected; the VDI files are dynamic, and only contain about 50GB of data despite having a max size of 250GB. Also note that we’re only converting the images we need; since this will be a single-node instance, we don’t need the apps tier System.img file.
     
    zathras:OVM_OL5U9_X86_64_EBIZ12.2_VIS_DB_PVM jpiwowar$ vboxmanage convertfromraw System.img /Volumes/Europa/OVM122/app/DBRoot.vdi; vboxmanage convertfromraw EBS.img /Volumes/Europa/OVM122/app/DBData.vdi
    Converting from raw image file="System.img" to file="/Volumes/Europa/OVM122/app/DBRoot.vdi"...
    Creating dynamic image with size 36284923904 bytes (34604MB)...
    Converting from raw image file="EBS.img" to file="/Volumes/Europa/OVM122/app/DBData.vdi"...
    Creating dynamic image with size 268435456000 bytes (256000MB)...
    
    zathras:OVM_OL5U9_X86_64_EBIZ12.2_APPS_PVM jpiwowar$ vboxmanage convertfromraw EBS.img /Volumes/Europa/OVM122/app/AppSoftware.vdi
    Converting from raw image file="EBS.img" to file="/Volumes/Europa/OVM122/app/AppSoftware.vdi"...
    Creating dynamic image with size 268435456000 bytes (256000MB)...
  4. Configure your VirtualBox VM
    Since we’re doing a single-node Vision install, we only need one VM. Here are the specs:
    • OS: Oracle Linux, 64-bit
    • CPUs: 2
    • Memory: 8GB (more if you have it available)
    • Device boot order: CD-ROM, Hard Disk
    • Storage: Attach Linux installation ISO to CD/DVD drive on the IDE controller and attach the three vdi files to the SATA controller, in the following order: DBRoot.vdi, DBData.vdi, AppSoftware.vdi
    • Network: At least one network interface, either Host-only or Bridged. If you choose Host-only, I recommend adding a second interface that uses NAT, so you can reach external (non-host) networks from your VM. If you use Bridged networking you don’t need this second interface.

    The screenshot below shows the final configuration of my VM:
    VM config details for 12.2.2 VIsion server

  5. Boot VM in rescue mode from the install CD (round 1)
    Enter “linux rescue” at the boot: prompt to enter rescue mode:
    Rescue boot screen
    Select the keyboard and language preferences that suit you, and select “No” when asked about configuring network interfaces. We don’t need them at this stage.
    Decline network setup
    Select “Continue” at the “Rescue” screen to look for a Linux installation.
    Look for Linux installations
    If you attached your disks properly, you will get a “Duplicate Labels” warning. This is because both the Vision database and Apps software disks have the same filesystem label (EBS). We will fix that in the next step. For now, keep going by selecting the “Reboot” option. DO NOT ACTUALLY RESTART YOUR VM FROM THE VIRTUALBOX MENU, JUST HIT ‘ENTER’. You’ll find that the “reboot button” does not, in fact, reboot the VM. Take a moment to savor the irony, then move on.
    reboot-only option screen (not really)
    Select OK when you get the sad news about not finding a Linux installation:
    Final rescue screen before command prompt
    Now we can fix our pesky disk label problem, as shown below. Once complete, exit to reboot the VM so we can continue with the real work.
     When finished please exit from the shell and your system will reboot.
    
    sh-3.2# e2label /dev/sdb1
    EBS
    sh-3.2# e2label /dev/sdc1
    EBS
    sh-3.2# e2label /dev/sdc1 EBSAPPS
    sh-3.2# e2label /dev/sdc1
    EBSAPPS
    
    sh-3.2#  exit
  6. Boot VM in rescue mode from the install CD (round 2)
    Follow the same steps as above to boot in rescue mode. This time, the rescue boot should find a Linux installation, and give you the option to mount it. This is where we’ll make some adjustments to configure the server to use the correct kernel and mount both disks.
    • First, switch the root volume from the rescue system to the OVM template
      Your system is mounted under the /mnt/sysimage directory.
      When finished please exit from the shell and your system will reboot.
      
      sh-3.2# chroot /mnt/sysimage
    • Then, adjust /etc/fstab to include a mount point for the Apps tier filesystem. Make sure you set the last field (the fsck pass number) in the EBS and EBSAPPS lines to '0' to avoid the possibility of a lengthy fsck of those volumes during boot.
      sh-3.2# df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda2              18G  2.2G   15G  14% /
      /dev/sda1              99M   51M   44M  54% /boot
      /dev/sdb1             247G  184G   50G  79% /u01
      
      sh-3.2# cat /etc/fstab
      LABEL=/                 /                       ext3    defaults        1 1
      LABEL=/boot             /boot                   ext3    defaults        1 2
      tmpfs                   /dev/shm                tmpfs   defaults        0 0
      devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
      sysfs                   /sys                    sysfs   defaults        0 0
      proc                    /proc                   proc    defaults        0 0
      LABEL=SWAP-VM           swap                    swap    defaults        0 0
      LABEL=EBS               /u01                    ext3    defaults        1 2
      sh-3.2# cp /etc/fstab /etc/fstab.bak
      sh-3.2# vi /etc/fstab
      "/etc/fstab" 9L, 684C written
      sh-3.2# cat /etc/fstab
      LABEL=/                 /                       ext3    defaults        1 1
      LABEL=/boot             /boot                   ext3    defaults        1 2
      tmpfs                   /dev/shm                tmpfs   defaults        0 0
      devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
      sysfs                   /sys                    sysfs   defaults        0 0
      proc                    /proc                   proc    defaults        0 0
      LABEL=SWAP-VM           swap                    swap    defaults        0 0
      LABEL=EBS               /u01                    ext3    defaults        1 0
      LABEL=EBSAPPS           /u02                    ext3    defaults        1 0
      sh-3.2# mkdir /u02
      sh-3.2# mount /u02
      sh-3.2# df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda2              18G  2.2G   15G  14% /
      /dev/sda1              99M   51M   44M  54% /boot
      /dev/sdb1             247G  184G   50G  79% /u01
      /dev/sdc1             247G   53G  182G  23% /u02
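      If you would rather script the fstab change than edit it in vi, the field to touch is the sixth one (the fsck pass number). A minimal sketch of the idea, run against a scratch copy so nothing is at risk (file names and the single sample line are illustrative):

      ```shell
      # Work on a scratch fstab-style file: set the fsck pass (6th field)
      # of the EBS line to 0 and append the EBSAPPS mount for /u02.
      printf 'LABEL=EBS               /u01   ext3    defaults        1 2\n' > fstab.work
      awk '$1 == "LABEL=EBS" { $6 = 0 } { print }' fstab.work > fstab.new
      echo 'LABEL=EBSAPPS           /u02   ext3    defaults        1 0' >> fstab.new
      cat fstab.new
      ```

      Note that awk rebuilds the edited line with single-space separators, so the column alignment changes; the file still works, but if you care about the tidy columns, stick with vi.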
    • Finally, run mkinitrd to make the system use the already-installed non-Xen kernel (make sure you get the whole command here; it may be truncated on the page).
       
      sh-3.2# mkinitrd -v -f /boot/initrd-2.6.39-300.17.2.el5uek.img 2.6.39-300.17.2.el5uek 
      Creating initramfs
      Modulefile is /etc/modprobe.conf
      Looking for deps of module ehci-hcd
      Looking for deps of module ohci-hcd
      Looking for deps of module uhci-hcd
      Looking for deps of module ext3: mbcache jbd
      Looking for deps of module mbcache
      Looking for deps of module jbd
      Found root device sda2 for LABEL=/
      Looking for driver for device sda2
      Looking for deps of module pci:v00008086d00002829sv00000000sd00000000bc01sc06i01: libahci ahci libahci ahci
      Looking for deps of module libahci
      Looking for deps of module ahci: libahci
      Looking for deps of module iscsi_tcp: scsi_transport_iscsi libiscsi libiscsi_tcp
      Looking for deps of module scsi_transport_iscsi
      Looking for deps of module libiscsi: scsi_transport_iscsi
      Looking for deps of module libiscsi_tcp: scsi_transport_iscsi libiscsi
      Looking for deps of module sr_mod: cdrom
      Looking for deps of module cdrom
      Looking for deps of module sd_mod: crc-t10dif
      Looking for deps of module crc-t10dif
      Looking for driver for device sda3
      Looking for deps of module pci:v00008086d00002829sv00000000sd00000000bc01sc06i01: libahci ahci libahci ahci
      Looking for deps of module xenblk: xen-blkfront
      Looking for deps of module xen-blkfront
      Looking for deps of module ide-disk
      Looking for deps of module dm-mem-cache
      Looking for deps of module dm-region_hash: dm-mod dm-log dm-region-hash
      Looking for deps of module dm-mod
      Looking for deps of module dm-log: dm-mod
      Looking for deps of module dm-region-hash: dm-mod dm-log
      Looking for deps of module dm-message
      Looking for deps of module dm-raid45
      Using modules:   /lib/modules/2.6.39-300.17.2.el5uek/kernel/fs/mbcache.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/fs/jbd/jbd.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/fs/ext3/ext3.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/ata/libahci.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/ata/ahci.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/scsi_transport_iscsi.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/libiscsi.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/libiscsi_tcp.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/iscsi_tcp.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/cdrom/cdrom.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/sr_mod.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/lib/crc-t10dif.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/sd_mod.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/block/xen-blkfront.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/md/dm-mod.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/md/dm-log.ko /lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/md/dm-region-hash.ko
      /sbin/nash -> /tmp/initrd.qO1496/bin/nash
      /sbin/insmod.static -> /tmp/initrd.qO1496/bin/insmod
      /etc/udev/rules.d/05-udev-early.rules -> /tmp/initrd.qO1496/etc/udev/rules.d/05-udev-early.rules
      /sbin/firmware_helper.static -> /tmp/initrd.qO1496/sbin/firmware_helper
      /sbin/udevd.static -> /tmp/initrd.qO1496/sbin/udevd
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/fs/mbcache.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/mbcache.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/fs/jbd/jbd.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/jbd.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/fs/ext3/ext3.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/ext3.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/ata/libahci.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/libahci.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/ata/ahci.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/ahci.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/scsi_transport_iscsi.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/scsi_transport_iscsi.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/libiscsi.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/libiscsi.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/libiscsi_tcp.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/libiscsi_tcp.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/iscsi_tcp.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/iscsi_tcp.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/cdrom/cdrom.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/cdrom.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/sr_mod.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/sr_mod.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/lib/crc-t10dif.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/crc-t10dif.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/scsi/sd_mod.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/sd_mod.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/block/xen-blkfront.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/xen-blkfront.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/md/dm-mod.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/dm-mod.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/md/dm-log.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/dm-log.ko' [elf64-x86-64]
      copy from `/lib/modules/2.6.39-300.17.2.el5uek/kernel/drivers/md/dm-region-hash.ko' [elf64-x86-64] to `/tmp/initrd.qO1496/lib/dm-region-hash.ko' [elf64-x86-64]
      /sbin/dmraid.static -> /tmp/initrd.qO1496/bin/dmraid
      /sbin/kpartx.static -> /tmp/initrd.qO1496/bin/kpartx
      Adding module mbcache
      Adding module jbd
      Adding module ext3
      Adding module libahci
      Adding module ahci
      Adding module scsi_transport_iscsi
      Adding module libiscsi
      Adding module libiscsi_tcp
      Adding module iscsi_tcp
      Adding module cdrom
      Adding module sr_mod
      Adding module crc-t10dif
      Adding module sd_mod
      Adding module xen-blkfront
      Adding module dm-mod
      Adding module dm-log
      Adding module dm-region-hash
      sh-3.2#
  7. Remove the rescue DVD from the VM (menu navigation: Devices->CD/DVD Devices->Remove disk from virtual drive) and exit to reboot the VM. Otherwise, it’ll boot from the CD again. If you get a warning about the drive being in use, choose the “force unmount” option; this won’t hurt anything.
    sh-3.2# exit
    exit
    sh-3.2# exit
  8. Network configuration
    When the VM reboots, it will run a script included with the OVM template that performs one-time configuration of the network and sets up the 12.2 Vision database. Make sure you choose an IP address which is compatible with your VM’s primary network interface. In my case, that’s the host-only network vboxnet0, 192.168.56.0/255.255.255.0.

    IMPORTANT: Before you enter any network configuration info, now would be an excellent time to take a snapshot of your VM (menu navigation: Machine->Take Snapshot). Once network configuration completes, the script will roll directly into configuring the database tier, and if anything breaks during that process, you might have to re-extract and re-convert the template files again. That’s not a recipe for fun times. No, I totally didn’t (re)-learn this the hard way, why do you ask? ;-)

    Answer the questions presented by the network configuration script. Unless you’re doing something a bit weird, your answers should be similar to mine. Remember the IP, domain, and hostname you choose; you’ll need them later.

                    Welcome to Oracle Linux Oracle Linux Server release 5.9
                    Press 'I' to enter interactive startup.
    Starting udev: [  OK  ]
    Loading default keymap (us): [  OK  ]
    [ Some startup output snipped to improve readability.]
    Starting oraclevm-template...
    Regenerating SSH host keys.
    Stopping sshd: [  OK  ]
    Generating SSH1 RSA host key: [  OK  ]
    Generating SSH2 RSA host key: [  OK  ]
    Generating SSH2 DSA host key: [  OK  ]
    Starting sshd: [  OK  ]
    =======================================
    Configuring Oracle E-Business Suite...
    =======================================
    =======================================
            Configuring the Network...
    =======================================
    Configuring the Network Interactively
    
    Configuring network interface.
      Network device: eth0
      Hardware address: 08:00:27:75:30:B6
    
      Do you want to enable dynamic IP configuration (DHCP) (Y|n)?n
    
      Enter static IP address: 192.168.56.58
      Enter netmask: [255.255.255.0] [enter to take default]
      Enter gateway: 192.168.56.1
      Enter DNS server: 8.8.8.8
    
      Shutting down interface eth0:  [  OK  ]
      Shutting down interface eth1:  [  OK  ]
      Shutting down loopback interface:  [  OK  ]
    
      Configuring network settings.
        IP configuration: Static IP address
    
      Bringing up loopback interface:  [  OK  ]
      Bringing up interface eth0:  [  OK  ]
      Bringing up interface eth1:
      Determining IP information for eth1... done.
      [  OK  ]
    
      Enter hostname (e.g, host.example.com): coriana6.local.org
    
      Network configuration changed successfully.
          IP configuration: Static IP address
          IP address:       192.168.56.58
          Netmask:          255.255.255.0
          Gateway:          10.0.3.2
          DNS server:       10.0.3.2
          Hostname:         coriana6.local.org
      =======================================
              Disabling the Linux Firewall...
      =======================================
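    If you plan to reach the VM by name from a browser on your desktop later, it helps to add a matching entry to the hosts file on the VirtualBox host machine (the IP and hostname below are the ones chosen in my run; substitute your own):

    ```
    192.168.56.58   coriana6.local.org   coriana6
    ```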
  9. Configuration of db tier
    After network configuration completes, the template deployment script will invoke Rapid Clone to configure the database tier. All you need to supply at the beginning is the desired SID for the database. Remember that for later, too. :-) When Rapid Clone completes, you will be prompted to select new passwords for the root and oracle users.
    ==================================================
     Prepare the Pairs File for Database Tier Clone...
    ===================================================
    
    ORACLE_SID is not set in the Pairs File
    Enter the Oracle Database SID :VIS122
    ==========================================================
                    Adding User oracle
    ==========================================================
    ==========================================================
                    Starting DB Tier configuration
    ==========================================================
    Parameters Used for this Configuration...
    The Pairs File :/u01/scripts/inst_db_pairs.txt
    The Source context file used  :/u01/install/11.2.0/appsutil/clone/context/db/CTXORIG.xml
    The Target context file       :/u01/install/11.2.0/appsutil/VIS122_coriana6.xml
    ==========================================================
     The Configuration Used to Create this VM...
     The Oracle E-Business Suite DBSID : VIS122
     The Oracle E-Business Suite DB HostName : %HOSTNAME_NODOMAIN%
     The Oracle E-Business Suite DB Domain Name :%DOMAIN%
     The Oracle E-Business Suite DB TNS Port :1521
    ==========================================================
    ==========================================================
            Cloning the DB Tier Context File
    ==========================================================
    
                         Copyright (c) 2011 Oracle Corporation
                            Redwood Shores, California, USA
    
                            Oracle E-Business Suite Rapid Clone
    
                                     Version 12.2
    
                          adclonectx Version 120.30.12020000.4
    
    Running:
    /u01/install/11.2.0/appsutil/clone/bin/../jre/bin/java -Xmx600M -classpath /u01/install/11.2.0/appsutil/clone/bin/../jlib/ojdbc5.jar:/u01/install/11.2.0/appsutil/clone/bin/../jlib/xmlparserv2.jar:/u01/install/11.2.0/appsutil/clone/bin/../jlib/java oracle.apps.ad.context.CloneContext  -e /u01/install/11.2.0/appsutil/clone/context/db/CTXORIG.xml -pairsfile /u01/scripts/inst_db_pairs.txt -out /u01/install/11.2.0/appsutil/VIS122_coriana6.xml -noprompt
    
    Log file located at /u01/install/11.2.0/appsutil/log/CloneContext_1219031841.log
    Report file located at /u01/install/11.2.0/appsutil/temp/portpool.lst
    Complete port information available at /u01/install/11.2.0/appsutil/temp/portpool.lst
    
    Creating the new Database Context file from :
      /u01/install/11.2.0/appsutil/clone/context/db/adxdbctx.tmp
    
    The new database context file has been created :
      /u01/install/11.2.0/appsutil/VIS122_coriana6.xml
    
    Log file located at /u01/install/11.2.0/appsutil/log/CloneContext_1219031841.log
    contextfile=/u01/install/11.2.0/appsutil/VIS122_coriana6.xml
    Check Clone Context logfile /u01/install/11.2.0/appsutil/log/CloneContext_1219031841.log for details.
    Executing adcfgclone.pl on the Database Tier
    
                         Copyright (c) 2011 Oracle Corporation
                            Redwood Shores, California, USA
    
                            Oracle E-Business Suite Rapid Clone
    
                                     Version 12.2
    
                          adcfgclone Version 120.63.12020000.22
    stty: standard input: Inappropriate ioctl for device
    
    Enter the APPS password :
    stty: standard input: Inappropriate ioctl for device
    
    Running Rapid Clone with command:
    Running:
    perl /u01/install/11.2.0/appsutil/clone/bin/adclone.pl java=/u01/install/11.2.0/appsutil/clone/bin/../jre mode=apply stage=/u01/install/11.2.0/appsutil/clone component=dbTier method=CUSTOM dbctxtg=/u01/install/11.2.0/appsutil/VIS122_coriana6.xml showProgress contextValidated=false
    
    Beginning database tier Apply - Thu Dec 19 03:18:45 2013
    
    /u01/install/11.2.0/appsutil/clone/bin/../jre/bin/java -Xmx600M -DCONTEXT_VALIDATED=false -Doracle.installer.oui_loc=/u01/install/11.2.0/oui -classpath /u01/install/11.2.0/appsutil/clone/jlib/xmlparserv2.jar:/u01/install/11.2.0/appsutil/clone/jlib/ojdbc6.jar:/u01/install/11.2.0/appsutil/clone/jlib/java:/u01/install/11.2.0/appsutil/clone/jlib/oui/OraInstaller.jar:/u01/install/11.2.0/appsutil/clone/jlib/oui/ewt3.jar:/u01/install/11.2.0/appsutil/clone/jlib/oui/share.jar:/u01/install/11.2.0/appsutil/clone/jlib/oui/srvm.jar:/u01/install/11.2.0/appsutil/clone/jlib/ojmisc.jar   oracle.apps.ad.clone.ApplyDBTier -e /u01/install/11.2.0/appsutil/VIS122_coriana6.xml -stage /u01/install/11.2.0/appsutil/clone   -showProgress
    APPS Password : Log file located at /u01/install/11.2.0/appsutil/log/VIS122_coriana6/ApplyDBTier_12190318.log
      |      0% completed
    Log file located at /u01/install/11.2.0/appsutil/log/VIS122_coriana6/ApplyDBTier_12190318.log 
    Completed Apply...
    Thu Dec 19 03:29:35 2013
    
    Starting database listener for VIS122:
    Running:
    /u01/install/11.2.0/appsutil/scripts/VIS122_coriana6/addlnctl.sh start VIS122
    Logfile: /u01/install/11.2.0/appsutil/log/VIS122_coriana6/addlnctl.txt
    
    You are running addlnctl.sh version 120.4
    
    Starting listener process VIS122 ...
    
    Listener VIS122 has already been started.
    
    addlnctl.sh: exiting with status 0
    
    addlnctl.sh: check the logfile /u01/install/11.2.0/appsutil/log/VIS122_coriana6/addlnctl.txt for more information ...
    
    Cloning the DB Tier Completed Successfully
    Logfile: /u01/install/11.2.0/appsutil/log/VIS122_coriana6/addlnctl.txt
    
    You are running addlnctl.sh version 120.4
    
    Shutting down listener process VIS122 ...
    
    LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 19-DEC-2013 03:29:35
    
    Copyright (c) 1991, 2011, Oracle.  All rights reserved.
    
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=coriana6.local.org)(PORT=1521)))
    The command completed successfully
    
    addlnctl.sh: exiting with status 0
    
    addlnctl.sh: check the logfile /u01/install/11.2.0/appsutil/log/VIS122_coriana6/addlnctl.txt for more information ...
    
    You are running addbctl.sh version 120.1
    
    Shutting down database VIS122 ...
    
    SQL*Plus: Release 11.2.0.3.0 Production on Thu Dec 19 03:29:35 2013
    
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    
    Connected.
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    addbctl.sh: exiting with status 0
    
    ==========================================================
          Enabling the DB Service Startup on Boot
    ==========================================================
    ==========================================================
          Starting the Oracle E-Business Suite DB Tier Services
    ==========================================================
    Logfile: /u01/install/11.2.0/appsutil/log/VIS122_coriana6/addlnctl.txt
    
    You are running addlnctl.sh version 120.4
    
    Starting listener process VIS122 ...
    
    LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 19-DEC-2013 03:29:44
    
    Copyright (c) 1991, 2011, Oracle.  All rights reserved.
    
    Starting /u01/install/11.2.0/bin/tnslsnr: please wait...
    
    TNSLSNR for Linux: Version 11.2.0.3.0 - Production
    System parameter file is /u01/install/11.2.0/network/admin/VIS122_coriana6/listener.ora
    Log messages written to /u01/install/11.2.0/admin/VIS122_coriana6/diag/tnslsnr/coriana6/vis122/alert/log.xml
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=coriana6.local.org)(PORT=1521)))
    
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=coriana6.local.org)(PORT=1521)))
    STATUS of the LISTENER
    ------------------------
    Alias                     VIS122
    Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
    Start Date                19-DEC-2013 03:29:45
    Uptime                    0 days 0 hr. 0 min. 0 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u01/install/11.2.0/network/admin/VIS122_coriana6/listener.ora
    Listener Log File         /u01/install/11.2.0/admin/VIS122_coriana6/diag/tnslsnr/coriana6/vis122/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=coriana6.local.org)(PORT=1521)))
    Services Summary...
    Service "VIS122" has 1 instance(s).
      Instance "VIS122", status UNKNOWN, has 1 handler(s) for this service...
    The command completed successfully
    
    addlnctl.sh: exiting with status 0
    
    addlnctl.sh: check the logfile /u01/install/11.2.0/appsutil/log/VIS122_coriana6/addlnctl.txt for more information ...
    
    You are running addbctl.sh version 120.1
    
    Starting the database VIS122 ...
    
    SQL*Plus: Release 11.2.0.3.0 Production on Thu Dec 19 03:29:45 2013
    
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    
    Connected to an idle instance.
    ORACLE instance started.
    
    Total System Global Area 2137886720 bytes
    Fixed Size                  2230072 bytes
    Variable Size             452987080 bytes
    Database Buffers         1660944384 bytes
    Redo Buffers               21725184 bytes
    Database mounted.
    Database opened.
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    addbctl.sh: exiting with status 0
    
    ==========================================================
          Changing passwords for the default users
    ==========================================================
    Changing password for user oracle.
    New UNIX password:
    BAD PASSWORD: it is based on a dictionary word
    Retype new UNIX password:
    passwd: all authentication tokens updated successfully.
    Changing password for user root.
    New UNIX password:
    BAD PASSWORD: it is based on your username
    Retype new UNIX password:
    passwd: all authentication tokens updated successfully.
    =====================INSTALLATION SUMMARY============
    
    The Oracle E-Business Suite DB HostName : coriana6.local.org
    The Base Installation Directory         : /u01/install
    The Oracle Home Location                : /u01/install/11.2.0
    The Oracle E-Business Suite Data File Dir : /u01/install/data
    The Oracle E-Business Suite DBSID       : VIS122
    The Oracle E-Business Suite DB TNS Port : 1521
    ==========================================================
    
    Will continue in 10 seconds, or press any key to continue...
    Template configuration disabled.
     [  OK  ]
  10. Prepare the server for apps configuration
    Before we can run the configuration script to configure the apps tier of the instance, we need to make a few adjustments. Please note: most of these steps will not be necessary if you’re trying to create a two-node instance; they’re only required because we’re trying to cram two nodes into one server.
    • Log in as root and adjust permissions on the ping executable. Otherwise, you may encounter issues with concurrent manager startup. This would probably not be necessary if we were using the Apps tier template’s root disk.
      coriana6.local.org login: root
      Password:
      Last login: Wed Oct 16 03:42:27 on tty1
      [root@coriana6 ~]# ls -l /bin/ping
      -rwxr-xr-x 1 root root 37312 Jul  2  2009 /bin/ping
      [root@coriana6 ~]# chmod u+s /bin/ping
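      The chmod u+s above sets the setuid bit, which is what lets non-root processes (such as the concurrent manager's checks) run ping with the privileges of its owner, root. A quick self-contained illustration of what the bit looks like in a mode string, using a scratch file rather than the real ping binary:

      ```shell
      # Create a dummy executable and set the setuid bit on it; the owner
      # execute position in the mode string changes from 'x' to 's'.
      touch demo_bin
      chmod 755 demo_bin
      stat -c '%A' demo_bin    # -rwxr-xr-x
      chmod u+s demo_bin
      stat -c '%A' demo_bin    # -rwsr-xr-x
      ```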
    • Next, we need to change the configuration scripts and associated files to use the new mount point for the apps tier filesystems (/u01 is expected by default), and to disable the network configuration steps (we aren't running the script on a new server, so they aren't needed):
       
      [root@coriana6 ~]# cd /u02/install/scripts
      [root@coriana6 scripts]# ls
      apps_pairs.txt  config.sh    inst_apps_pairs.txt  stopapps.sh
      cleanup.sh      ebizapps.rc  startapps.sh
      [root@coriana6 scripts]# perl -pi.old -e 's/u01/u02/g' *
      [root@coriana6 scripts]# perl -pi.nonet -e 's/ovm_configure_network/echo "Skip ovm_configure_network"/g' config.sh
      [root@coriana6 scripts]# diff config.sh config.sh.nonet
      79c79
      < # Call echo "Skip ovm_configure_network" function which is part of JeOS function library
      ---
      > # Call ovm_configure_network function which is part of JeOS function library
      115c115
      <       echo "Skip ovm_configure_network"
      ---
      >       ovm_configure_network
      130c130
      <          echo "Skip ovm_configure_network" "$IP_ADDR" "$NET_MASK" "$GATEWAY" "$DNS_HOST" "$HOST_NAME"
      ---
      >          ovm_configure_network "$IP_ADDR" "$NET_MASK" "$GATEWAY" "$DNS_HOST" "$HOST_NAME"
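      The perl -pi.old invocation used above is a handy in-place editing pattern: -p loops over every input line, -i.old rewrites each file in place while keeping a backup with a .old suffix, and the -e expression performs the substitution. A self-contained sketch of the same pattern (file name and contents are illustrative):

      ```shell
      # Rewrite u01 -> u02 in place, keeping the original as demo.sh.old.
      printf 'APPS_BASE=/u01/install\n' > demo.sh
      perl -pi.old -e 's/u01/u02/g' demo.sh
      cat demo.sh       # APPS_BASE=/u02/install
      cat demo.sh.old   # APPS_BASE=/u01/install
      ```

      The backup suffix is what makes diff config.sh config.sh.nonet possible afterwards, so it's worth keeping even for quick edits.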
  11. Execute the apps tier config script
    Now, we’re finally ready to go! Execute the config.sh script (still connected as root), and answer the prompts with the values from the configuration of the database tier. I’ve left the output from my configuration run completely intact as a reference, but the only user inputs required are at the beginning of the process. Please note: execution of this script will take quite a long time; it took at least an hour on my modest test system. Rapid Clone has to configure both the Run and Patch filesystems, and it starts up the application services at the end. There is significant overhead in the multiple stops and starts of the WLS components during this process. This would not be as painful on a properly sized system, of course.
     
    [root@coriana6 scripts]# ./config.sh
    =======================================
    Configuring Oracle E-Business Suite...
    =======================================
    =======================================
          Configuring the Network...
    =======================================
    Configuring the Network Interactively
    Skip ovm_configure_network
    =======================================
          Disabling the Linux Firewall...
    =======================================
    ==========================================================
                  Adding User oracle
    ==========================================================
    ./config.sh: line 165: [: missing `]'
    ==================================================
     Prepare the Pairs File for Application Tier Clone...
    ===================================================
    ==================================================
     Checking if the FQDN is > 30 chars...
    ===================================================
    
    The FQDN is less than 30 characters. Proceeding with the configuration
    Database Tier Information is not set in the Pairs File
    Prompting the user for Database Tier Information
    
    Enter the Database Tier Host Name (without the domain) :coriana6
    
    Enter the Database Tier Domain Name :local.org
    
    Enter the Oracle Database SID :VIS122
    
    Enter the TNS Listener Port Number :1521
    ==========================================================
                  Starting Apps Tier configuration
    ==========================================================
    Parameters Used for this Configuration...
    The Pairs File :/u02/install/scripts/inst_apps_pairs.txt
    The Source context file used  :/u02/install/fs1/EBSapps/comn/clone/context/apps/CTXORIG.xml
    The Target context file       :/u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml
    ==========================================================
                  Checking for the DB Host and Database
    ==========================================================
    
    Pinging the Database Host coriana6.local.org...
    
    The Database Host coriana6.local.org seem to be up
    Proceeding with rest of the configuration...
    Connected to Database VIS122 on coriana6...
    ==========================================================
          Configuring the VM as a new Application Tier Node
    ==========================================================
    ==========================================================
          Configuring the Application Tier File System (fs1)
    ==========================================================
    ==========================================================
          Cloning the Application Tier Context File
    ==========================================================
     Executing the command su oracle -c echo apps|./adclonectx.pl contextfile=/u02/install/fs1/EBSapps/comn/clone/context/apps/CTXORIG.xml pairsfile=/u02/install/scripts/inst_apps_pairs.txt outfile=/u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml nopromptmsg
    
                         Copyright (c) 2011 Oracle Corporation
                            Redwood Shores, California, USA
    
                            Oracle E-Business Suite Rapid Clone
    
                                     Version 12.2
    
                          adclonectx Version 120.30.12020000.4
    
    Running:
    /u02/install/fs1/EBSapps/comn/clone/bin/../jre/bin/java -Xmx600M -classpath /u02/install/fs1/EBSapps/comn/clone/bin/../jlib/ojdbc5.jar:/u02/install/fs1/EBSapps/comn/clone/bin/../jlib/xmlparserv2.jar:/u02/install/fs1/EBSapps/comn/clone/bin/../jlib/java oracle.apps.ad.context.CloneContext  -e /u02/install/fs1/EBSapps/comn/clone/context/apps/CTXORIG.xml -pairsfile /u02/install/scripts/inst_apps_pairs.txt -out /u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml -noprompt
    
    Log file located at /u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/log/CloneContext_1219072110.log
    
    Target System Base Directory set to /u02/install
    
    Target System Current File System Base set to /u02/install/fs1
    
    Target System Other File System Base set to /u02/install/fs2
    
    Target System Fusion Middleware Home set to /u02/install/fs1/FMW_Home
    
    Target System Web Oracle Home set to /u02/install/fs1/FMW_Home/webtier
    
    Target System Appl TOP set to /u02/install/fs1/EBSapps/appl
    
    Target System COMMON TOP set to /u02/install/fs1/EBSapps/comn
    
    Target System Instance Top set to /u02/install/fs1/inst/apps/VIS122_coriana6
    Report file located at /u02/install/fs1/inst/apps/VIS122_coriana6/admin/out/portpool.lst
    Complete port information available at /u02/install/fs1/inst/apps/VIS122_coriana6/admin/out/portpool.lst
    
    Creating the new APPL_TOP Context file from :
      /u02/install/fs1/EBSapps/comn/clone/context/apps/adxmlctx.tmp
    
    The new APPL_TOP context file has been created :
      /u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml
    
    Log file located at /u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/log/CloneContext_1219072110.log
    contextfile=/u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml
    Check Clone Context logfile /u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/log/CloneContext_1219072110.log for details.
     Executing the command perl adcfgclone.pl appsTier /u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml
    
                         Copyright (c) 2011 Oracle Corporation
                            Redwood Shores, California, USA
    
                            Oracle E-Business Suite Rapid Clone
    
                                     Version 12.2
    
                          adcfgclone Version 120.63.12020000.22
    stty: standard input: Inappropriate ioctl for device
    
    Enter the APPS password :
    stty: standard input: Inappropriate ioctl for device
    Running:
    /u02/install/fs1/EBSapps/comn/clone/bin/../jre/bin/java -Xmx600M -classpath /u02/install/fs1/EBSapps/comn/clone/jlib/obfuscatepassword.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/ojmisc.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/java:/u02/install/fs1/EBSapps/comn/clone/jlib/emCfg.jar oracle.apps.ad.clone.util.OPWrapper -encryptpwd /u02/install/fs1/EBSapps/comn/clone/bin/../FMW/tempinfoApps.txt
    stty: standard input: Inappropriate ioctl for device
    
    Enter the Weblogic AdminServer password :
    stty: standard input: Inappropriate ioctl for device
    Running:
    /u02/install/fs1/EBSapps/comn/clone/bin/../jre/bin/java -Xmx600M -classpath /u02/install/fs1/EBSapps/comn/clone/jlib/obfuscatepassword.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/ojmisc.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/java:/u02/install/fs1/EBSapps/comn/clone/jlib/emCfg.jar oracle.apps.ad.clone.util.OPWrapper /u02/install/fs1/EBSapps/comn/clone/bin/../FMW/tempinfo.txt
    Running:
    /u02/install/fs1/EBSapps/comn/clone/bin/../jre/bin/java -Xmx600M -classpath /u02/install/fs1/EBSapps/comn/clone/jlib/obfuscatepassword.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/ojmisc.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/java:/u02/install/fs1/EBSapps/comn/clone/jlib/emCfg.jar oracle.apps.ad.clone.util.OPWrapper /u02/install/fs1/EBSapps/comn/clone/bin/../FMW/EBSDataSource
    
    Running Rapid Clone with command:
    Running:
    perl /u02/install/fs1/EBSapps/comn/clone/bin/adclone.pl java=/u02/install/fs1/EBSapps/comn/clone/bin/../jre mode=apply stage=/u02/install/fs1/EBSapps/comn/clone component=appsTier method=CUSTOM appctxtg=/u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml showProgress contextValidated=false
    
    FMW Pre-requisite check log file location : /u02/install/fs1/EBSapps/comn/clone/FMW/logs/prereqcheck.log
    
    Running: /u02/install/fs1/EBSapps/comn/clone/FMW/t2pjdk/bin/java -classpath /u02/install/fs1/EBSapps/comn/clone/prereq/webtier/Scripts/ext/jlib/engine.jar:/u02/install/fs1/EBSapps/comn/clone/prereq/webtier/oui/jlib/OraPrereq.jar:/u02/install/fs1/EBSapps/comn/clone/prereq/webtier/oui/jlib/OraPrereqChecks.jar:/u02/install/fs1/EBSapps/comn/clone/prereq/webtier/oui/jlib/OraInstaller.jar:/u02/install/fs1/EBSapps/comn/clone/prereq/webtier/oui/jlib/OraInstallerNet.jar:/u02/install/fs1/EBSapps/comn/clone/prereq/webtier/oui/jlib/srvm.jar:/u02/install/fs1/EBSapps/comn/clone/prereq/webtier/Scripts/ext/jlib/ojdl.jar:/u02/install/fs1/EBSapps/comn/clone/prereq/webtier/Scripts/ext/jlib/ojdl2.jar:/u02/install/fs1/EBSapps/comn/clone/prereq/webtier/Scripts/ext/jlib/ojdl-log4j.jar:/u02/install/fs1/EBSapps/comn/clone/prereq/webtier/oui/jlib/xmlparserv2.jar:/u02/install/fs1/EBSapps/comn/clone/prereq/webtier/oui/jlib/share.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/java oracle.apps.ad.clone.util.FMWOracleHomePreReqCheck -prereqCheckFMW -e /u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml -stage /u02/install/fs1/EBSapps/comn/clone -log /u02/install/fs1/EBSapps/comn/clone/FMW/logs/prereqcheck.log
    
    Beginning application tier Apply - Thu Dec 19 07:21:38 2013
    
    /u02/install/fs1/EBSapps/comn/clone/bin/../jre/bin/java -Xmx600M -DCONTEXT_VALIDATED=false -Doracle.installer.oui_loc=/oui -classpath /u02/install/fs1/EBSapps/comn/clone/jlib/xmlparserv2.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/ojdbc6.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/java:/u02/install/fs1/EBSapps/comn/clone/jlib/oui/OraInstaller.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/oui/ewt3.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/oui/share.jar:/u02/install/fs1/FMW_Home/webtier/../Oracle_EBS-app1/oui/jlib/srvm.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/ojmisc.jar:/u02/install/fs1/FMW_Home/wlserver_10.3/server/lib/weblogic.jar:/u02/install/fs1/EBSapps/comn/clone/jlib/obfuscatepassword.jar  oracle.apps.ad.clone.ApplyAppsTier -e /u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml -stage /u02/install/fs1/EBSapps/comn/clone    -showProgress -nopromptmsg
    Log file located at /u02/install/fs1/inst/apps/VIS122_coriana6/admin/log/ApplyAppsTier_12190721.log
      -      0% completed
    Log file located at /u02/install/fs1/inst/apps/VIS122_coriana6/admin/log/ApplyAppsTier_12190721.log
      \    100% completed
    
    Completed Apply...
    Thu Dec 19 07:49:40 2013
    
     Executing command: /u02/install/fs1/EBSapps/10.1.2/bin/sqlplus @/u02/install/fs1/EBSapps/appl/ad/12.0.0/patch/115/sql/truncate_ad_nodes_config_status.sql
    
    Do you want to startup the Application Services for VIS122? (y/n) [n] :
    Services not started
    
    ==========================================================
          Configuring the Application Tier File System (fs2)
    ==========================================================
    Copying the Application Tier File System from fs1 to fs2
    Executing the command su oracle -c perl adcfgclone.pl appsTier
    
                         Copyright (c) 2011 Oracle Corporation
                            Redwood Shores, California, USA
    
                            Oracle E-Business Suite Rapid Clone
    
                                     Version 12.2
    
                          adcfgclone Version 120.63.12020000.22
    stty: standard input: Inappropriate ioctl for device
    
    Enter the APPS password :
    stty: standard input: Inappropriate ioctl for device
    Running:
    /u02/install/fs2/EBSapps/comn/clone/bin/../jre/bin/java -Xmx600M -classpath /u02/install/fs2/EBSapps/comn/clone/jlib/obfuscatepassword.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/ojmisc.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/java:/u02/install/fs2/EBSapps/comn/clone/jlib/emCfg.jar oracle.apps.ad.clone.util.OPWrapper -encryptpwd /u02/install/fs2/EBSapps/comn/clone/bin/../FMW/tempinfoApps.txt
    stty: standard input: Inappropriate ioctl for device
    
    Enter the Weblogic AdminServer password :
    stty: standard input: Inappropriate ioctl for device
    Running:
    /u02/install/fs2/EBSapps/comn/clone/bin/../jre/bin/java -Xmx600M -classpath /u02/install/fs2/EBSapps/comn/clone/jlib/obfuscatepassword.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/ojmisc.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/java:/u02/install/fs2/EBSapps/comn/clone/jlib/emCfg.jar oracle.apps.ad.clone.util.OPWrapper /u02/install/fs2/EBSapps/comn/clone/bin/../FMW/tempinfo.txt
    Running:
    /u02/install/fs2/EBSapps/comn/clone/bin/../jre/bin/java -Xmx600M -classpath /u02/install/fs2/EBSapps/comn/clone/jlib/obfuscatepassword.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/ojmisc.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/java:/u02/install/fs2/EBSapps/comn/clone/jlib/emCfg.jar oracle.apps.ad.clone.util.OPWrapper /u02/install/fs2/EBSapps/comn/clone/bin/../FMW/EBSDataSource
    
    Do you want to add a node (yes/no) [no] :
    
    Running:
    /u02/install/fs2/EBSapps/comn/clone/bin/../jre/bin/java -Xmx600M -cp /u02/install/fs2/EBSapps/comn/clone/jlib/java:/u02/install/fs2/EBSapps/comn/clone/jlib/xmlparserv2.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/ojdbc5.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/obfuscatepassword.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/ojmisc.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/java:/u02/install/fs2/EBSapps/comn/clone/jlib/emCfg.jar oracle.apps.ad.context.CloneContext -e /u02/install/fs2/EBSapps/comn/clone/bin/../context/apps/CTXORIG.xml -validate -pairsfile /tmp/adpairsfile_26497.lst -stage /u02/install/fs2/EBSapps/comn/clone  2> /tmp/adcfgclone_26497.err; echo $? > /tmp/adcfgclone_26497.res
    
    Log file located at /u02/install/fs2/EBSapps/comn/clone/bin/CloneContext_1219075100.log
    
    Target System File Edition type [run] :
    Enter the full path of Run File System Context file :
    Provide the values required for creation of the new APPL_TOP Context file.
    
    Target System Fusion Middleware Home set to /u02/install/fs2/FMW_Home
    
    Target System Web Oracle Home set to /u02/install/fs2/FMW_Home/webtier
    
    Target System Appl TOP set to /u02/install/fs2/EBSapps/appl
    
    Target System COMMON TOP set to /u02/install/fs2/EBSapps/comn
    
    Target System Instance Top set to /u02/install/fs2/inst/apps/VIS122_coriana6
    
    Target System Port Pool [0-99] :
    Checking the port pool 1
    done: Port Pool 1 is free
    Report file located at /u02/install/fs2/inst/apps/VIS122_coriana6/admin/out/portpool.lst
    Complete port information available at /u02/install/fs2/inst/apps/VIS122_coriana6/admin/out/portpool.lst
    
    UTL_FILE_DIR on database tier consists of the following directories.
    
    1. /usr/tmp
    2. /usr/tmp
    3. /u01/install/11.2.0/appsutil/outbound/VIS122_coriana6
    4. /usr/tmp
    Choose a value which will be set as APPLPTMP value on the target node [1] : RC-00208: Error: Not a valid number
    
    UTL_FILE_DIR on database tier consists of the following directories.
    
    1. /usr/tmp
    2. /usr/tmp
    3. /u01/install/11.2.0/appsutil/outbound/VIS122_coriana6
    4. /usr/tmp
    Choose a value which will be set as APPLPTMP value on the target node [1] : RC-00200: Error: Exception occurred while taking input from user
    
    Creating the new APPL_TOP Context file from :
      /u02/install/fs2/EBSapps/comn/clone/context/apps/adxmlctx.tmp
    
    The new APPL_TOP context file has been created :
      /u02/install/fs2/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml
    
    Log file located at /u02/install/fs2/EBSapps/comn/clone/bin/CloneContext_1219075100.log
    Check Clone Context logfile /u02/install/fs2/EBSapps/comn/clone/bin/CloneContext_1219075100.log for details.
    
    Running Rapid Clone with command:
    Running:
    perl /u02/install/fs2/EBSapps/comn/clone/bin/adclone.pl java=/u02/install/fs2/EBSapps/comn/clone/bin/../jre mode=apply stage=/u02/install/fs2/EBSapps/comn/clone component=appsTier method=CUSTOM appctxtg=/u02/install/fs2/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml showProgress contextValidated=true
    
    FMW Pre-requisite check log file location : /u02/install/fs2/EBSapps/comn/clone/FMW/logs/prereqcheck.log
    
    Running: /u02/install/fs2/EBSapps/comn/clone/FMW/t2pjdk/bin/java -classpath /u02/install/fs2/EBSapps/comn/clone/prereq/webtier/Scripts/ext/jlib/engine.jar:/u02/install/fs2/EBSapps/comn/clone/prereq/webtier/oui/jlib/OraPrereq.jar:/u02/install/fs2/EBSapps/comn/clone/prereq/webtier/oui/jlib/OraPrereqChecks.jar:/u02/install/fs2/EBSapps/comn/clone/prereq/webtier/oui/jlib/OraInstaller.jar:/u02/install/fs2/EBSapps/comn/clone/prereq/webtier/oui/jlib/OraInstallerNet.jar:/u02/install/fs2/EBSapps/comn/clone/prereq/webtier/oui/jlib/srvm.jar:/u02/install/fs2/EBSapps/comn/clone/prereq/webtier/Scripts/ext/jlib/ojdl.jar:/u02/install/fs2/EBSapps/comn/clone/prereq/webtier/Scripts/ext/jlib/ojdl2.jar:/u02/install/fs2/EBSapps/comn/clone/prereq/webtier/Scripts/ext/jlib/ojdl-log4j.jar:/u02/install/fs2/EBSapps/comn/clone/prereq/webtier/oui/jlib/xmlparserv2.jar:/u02/install/fs2/EBSapps/comn/clone/prereq/webtier/oui/jlib/share.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/java oracle.apps.ad.clone.util.FMWOracleHomePreReqCheck -prereqCheckFMW -e /u02/install/fs2/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml -stage /u02/install/fs2/EBSapps/comn/clone -log /u02/install/fs2/EBSapps/comn/clone/FMW/logs/prereqcheck.log
    
    Beginning application tier Apply - Thu Dec 19 07:51:20 2013
    
    /u02/install/fs2/EBSapps/comn/clone/bin/../jre/bin/java -Xmx600M -DCONTEXT_VALIDATED=true -Doracle.installer.oui_loc=/oui -classpath /u02/install/fs2/EBSapps/comn/clone/jlib/xmlparserv2.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/ojdbc6.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/java:/u02/install/fs2/EBSapps/comn/clone/jlib/oui/OraInstaller.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/oui/ewt3.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/oui/share.jar:/u02/install/fs2/FMW_Home/webtier/../Oracle_EBS-app1/oui/jlib/srvm.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/ojmisc.jar:/u02/install/fs2/FMW_Home/wlserver_10.3/server/lib/weblogic.jar:/u02/install/fs2/EBSapps/comn/clone/jlib/obfuscatepassword.jar  oracle.apps.ad.clone.ApplyAppsTier -e /u02/install/fs2/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml -stage /u02/install/fs2/EBSapps/comn/clone    -showProgress -nopromptmsg
    Log file located at /u02/install/fs2/inst/apps/VIS122_coriana6/admin/log/ApplyAppsTier_12190751.log
      |    100% completed
    
    Completed Apply...
    Thu Dec 19 08:15:33 2013
    
    Looking for incomplete CLONE record in ad_adop_session_patches table
    
    The CLONE record status is no rows selected
    
    Updating incomplete CLONE record to COMPLETED
    ==========================================================
          Enabling the Apps Tier Service Startup on Boot
    ==========================================================
    ==========================================================
          Enabling the NFS service for shared file system setup
    ==========================================================
    ==========================================================
          Starting the Oracle E-Business Suite Application tier Services
    ==========================================================
    Starting the Oracle E-Business Suite Application Tier Services
    You are running adstrtal.sh version 120.24.12020000.7
    
    The logfile for this session is located at /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adstrtal.log
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/jtffmctl.sh start
    Timeout specified in context file: 100 second(s)
    
    script returned:
    ****************************************************
    
    You are running jtffmctl.sh version 120.3.12020000.4
    
    Validating Fulfillment patch level via /u02/install/fs1/EBSapps/comn/java/classes
    Fulfillment patch level validated.
    Starting Fulfillment Server for VIS122 on port 9300 ...
    
    jtffmctl.sh: exiting with status 0
    
    .end std out.
    
    .end err out.
    
    ****************************************************
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/adopmnctl.sh start
    Timeout specified in context file: 100 second(s)
    
    script returned:
    ****************************************************
    
    You are running adopmnctl.sh version 120.0.12020000.2
    
    Starting Oracle Process Manager (OPMN) ...
    
    adopmnctl.sh: exiting with status 0
    
    adopmnctl.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adopmnctl.txt for more information ...
    
    .end std out.
    
    .end err out.
    
    ****************************************************
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/adapcctl.sh start
    Timeout specified in context file: 100 second(s)
    
    script returned:
    ****************************************************
    
    You are running adapcctl.sh version 120.0.12020000.2
    
    Starting OPMN managed Oracle HTTP Server (OHS) instance ...
    opmnctl start: opmn is already running.
    opmnctl startproc: starting opmn managed processes...
    
    adapcctl.sh: exiting with status 0
    
    adapcctl.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adapcctl.txt for more information ...
    
    .end std out.
    
    .end err out.
    
    ****************************************************
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/adnodemgrctl.sh start -nopromptmsg
    Timeout specified in context file: 100 second(s)
    
    script returned:
    ****************************************************
    
    You are running adnodemgrctl.sh version 120.11.12020000.4
    
    Calling txkChkEBSDependecies.pl to perform dependency checks for oacore
    Perl script txkChkEBSDependecies.pl got executed successfully
    
    Starting the Node Manager...
    NMProcess:   
    NMProcess: Dec 19, 2013 8:17:48 AM weblogic.nodemanager.server.NMServerConfig initDomainsMap
    NMProcess: INFO: Loading domains file: /u02/install/fs1/FMW_Home/wlserver_10.3/common/nodemanager/nmHome1/nodemanager.domains
    NMProcess:   
    NMProcess: Dec 19, 2013 8:17:48 AM weblogic.nodemanager.server.NMServer 
    NMProcess: WARNING: Node manager configuration properties file '/u02/install/fs1/FMW_Home/wlserver_10.3/common/nodemanager/nmHome1/nodemanager.properties' not found. Using default settings.
    NMProcess:   
    NMProcess: Dec 19, 2013 8:17:48 AM weblogic.nodemanager.server.NMServer 
    NMProcess: INFO: Saving node manager configuration properties to '/u02/install/fs1/FMW_Home/wlserver_10.3/common/nodemanager/nmHome1/nodemanager.properties'
    Refer /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adnodemgrctl.txt for details
    
    adnodemgrctl.sh: exiting with status 0
    
    adnodemgrctl.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adnodemgrctl.txt for more information ...
    
    .end std out.
    *** ALL THE FOLLOWING FILES ARE REQUIRED FOR RESOLVING RUNTIME ERRORS
    *** Log File = /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/rgf/TXK/txkChkEBSDependecies_Thu_Dec_19_08_17_07_2013/txkChkEBSDependecies_Thu_Dec_19_08_17_07_2013.log
    
    .end err out.
    
    ****************************************************
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/adalnctl.sh start
    Timeout specified in context file: 100 second(s)
    
    script returned:
    ****************************************************
    
    adalnctl.sh version 120.3.12020000.2
    
    Checking for FNDFS executable.
    Starting listener process APPS_VIS122.
    
    adalnctl.sh: exiting with status 0
    
    adalnctl.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adalnctl.txt for more information ...
    
    .end std out.
    
    .end err out.
    
    ****************************************************
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/adcmctl.sh start
    Timeout specified in context file: 1000 second(s)
    
    script returned:
    ****************************************************
    
    You are running adcmctl.sh version 120.19.12020000.3
    
    Starting concurrent manager for VIS122 ...
    Starting VIS122_1219@VIS122 Internal Concurrent Manager
    Default printer is noprint
    
    adcmctl.sh: exiting with status 0
    
    adcmctl.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adcmctl.txt for more information ...
    
    .end std out.
    
    .end err out.
    
    ****************************************************
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/adadminsrvctl.sh start -nopromptmsg
    Timeout specified in context file: 1000 second(s)
    
    script returned:
    ****************************************************
    
    You are running adadminsrvctl.sh version 120.10.12020000.4
    
    Starting WLS Admin Server...
    Refer /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adadminsrvctl.txt for details
    
    adadminsrvctl.sh: exiting with status 0
    
    adadminsrvctl.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adadminsrvctl.txt for more information ...
    
    .end std out.
    
    .end err out.
    
    ****************************************************
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/admanagedsrvctl.sh start forms_server1 -nopromptmsg
    Timeout specified in context file: -1 second(s)
    
    script returned:
    ****************************************************
    
    You are running admanagedsrvctl.sh version 120.14.12020000.6
    
    Starting forms_server1...
    
    admanagedsrvctl.sh: exiting with status 0
    
    admanagedsrvctl.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adformsctl.txt for more information ...
    
    .end std out.
    
    .end err out.
    
    ****************************************************
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/admanagedsrvctl.sh start oafm_server1 -nopromptmsg
    Timeout specified in context file: -1 second(s)
    
    script returned:
    ****************************************************
    
    You are running admanagedsrvctl.sh version 120.14.12020000.6
    
    Starting oafm_server1...
    
    admanagedsrvctl.sh: exiting with status 0
    
    admanagedsrvctl.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adoafmctl.txt for more information ...
    
    .end std out.
    
    .end err out.
    
    ****************************************************
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/admanagedsrvctl.sh start forms-c4ws_server1 -nopromptmsg
    Timeout specified in context file: -1 second(s)
    
    script returned:
    ****************************************************
    
    You are running admanagedsrvctl.sh version 120.14.12020000.6
    
    Starting forms-c4ws_server1...
    
    admanagedsrvctl.sh: exiting with status 0
    
    admanagedsrvctl.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adforms-c4wsctl.txt for more information ...
    
    .end std out.
    
    .end err out.
    
    ****************************************************
    
    Executing service control script:
    /u02/install/fs1/inst/apps/VIS122_coriana6/admin/scripts/admanagedsrvctl.sh start oacore_server1 -nopromptmsg
    Timeout specified in context file: -1 second(s)
    
    script returned:
    ****************************************************
    
    You are running admanagedsrvctl.sh version 120.14.12020000.6
    
    Starting oacore_server1...
    
    admanagedsrvctl.sh: exiting with status 0
    
    admanagedsrvctl.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adoacorectl.txt for more information ...
    
    .end std out.
    
    .end err out.
    
    ****************************************************
    
    All enabled services for this node are started.
    
    adstrtal.sh: Exiting with status 0
    
    adstrtal.sh: check the logfile /u02/install/fs1/inst/apps/VIS122_coriana6/logs/appl/admin/log/adstrtal.log for more information ...
  12. At the end of the install, you’ll get another opportunity to change the oracle and root passwords (remember, these scripts were designed to run on separate servers), and an install summary:
     ==========================================================
          Change Passwords for the Default Users
    ==========================================================
    Changing password for user oracle.
    New UNIX password:
    BAD PASSWORD: it is based on a dictionary word
    Retype new UNIX password:
    passwd: all authentication tokens updated successfully.
    Changing password for user root.
    New UNIX password:
    BAD PASSWORD: it is based on a dictionary word
    Retype new UNIX password:
    passwd: all authentication tokens updated successfully.
    ===================INSTALLATION SUMMARY=============================
    Oracle E-Business Suite Installation Top Level Directory : /u02/install
    Oracle E-Business Suite Context File : /u02/install/fs1/inst/apps/VIS122_coriana6/appl/admin/VIS122_coriana6.xml
    Oracle E-Business Suite Login Page : http://coriana6.local.org:8000/OA_HTML/AppsLogin
    Oracle E-Business Suite Database Tier Host : coriana6.local.org
    Oracle E-Business Suite Database SID : VIS122
    Oracle E-Business Suite TNS_PORT : 1521
    ==================================================================
    
    Will continue in 60 seconds, or press any key to continue...

Now what?

Now the real fun begins! If you have added your VM’s IP and hostname to the local hosts file on your host machine, you can access your Vision instance at the URL listed after “Oracle E-Business Suite Login Page” at the end of the Apps tier Rapid Clone run. The default password for the admin user (SYSADMIN) is the same as it always is.
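If you haven’t set up that hosts-file entry yet, it looks like this (the IP address below is just an example from a typical VirtualBox host-only network; substitute the address your VM actually has, and keep the hostname matching the one from the install summary):

```
192.168.56.101   coriana6.local.org   coriana6
```

On Linux and Mac hosts this goes in /etc/hosts; on Windows it’s C:\Windows\System32\drivers\etc\hosts (edited as Administrator).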

A few more things to note:

  • Use the startapps.sh and stopapps.sh scripts in /u02/install/scripts/ to (wait for it…) start and stop the Apps. Note that these scripts expect to be run by user oracle, not root. Also, these scripts have default passwords for the apps and weblogic users hard-coded in them, so if you change those passwords, you’ll need to update the scripts. Or you can just use the normal service start/stop scripts, but these wrapper scripts sure are convenient. ;)
  • Similarly, you’ll find start/stop scripts for database services in /u01/scripts (startdb.sh and stopdb.sh). These also need to be run as oracle.
  • Since this is a single-node system with both tiers owned by oracle, keeping your OS environment straight might take some practice. Try to remember to do database and apps work in separate sessions. To set the environment for the apps tier, use the script /u02/install/EBSapps.env (in most cases, you will want to set the “run” environment, e.g. “. /u02/install/EBSapps.env run” ). To set the environment for the database tier, invoke the /u01/install/11.2.0/SID_hostname.env file (e.g. “. /u01/install/11.2.0/VIS122_coriana6.env“). If you’re still learning Linux/Unix, please remember that you need to invoke these scripts with a leading “dot+space” as in the preceding examples, or the environment variable values will not be exported to your session.
  • The first time you try to log into your instance after starting it up, it will probably be painfully slow. Subsequent logins and similar operations will merely be slow.
  • The first time I visited the login page and the initial Navigator page for the Vision instance, I had a few “Unable to load” errors for some components on the page. I found that a refresh of the page resolved these issues. I had similar issues upon launching Forms. It takes time for some of these things to be cached, and the default timeout values on the server might not be configured for workstation-grade test systems.
  • The WLS EM Console, if you need it, will be on port 7001 of your instance (e.g. http://coriana6.local.org:7001/em). Log in with user ‘weblogic’, with the password in the startapps.sh script.
  • The WLS Admin console for your instance is also on port 7001, at the following URL: http://your_host.your_domain:7001/console (e.g. http://coriana6.local.org:7001/console). Log in to this interface with the ‘weblogic’ user, too.
  • In general, unless you find issues with services not starting up cleanly, you probably won’t need the WLS URLs very much. Unless you’re an Apps DBA, and hopefully you’re not, because I said at the very beginning that this post is not for Apps DBAs. ;-)
  • Oh, and the 12.2.3 patchset has just been released, so if you’d like some patching practice… ;-)
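The dot+space sourcing rule mentioned in the environment note above trips up a lot of Linux newcomers, so here is a tiny self-contained demonstration. The file /tmp/demo.env and the variable DEMO_TOP are made up purely for illustration; the real files are /u02/install/EBSapps.env and the database tier env file:

```shell
# Toy demonstration of why the environment files must be *sourced*:
# a script executed with "sh" runs in a child shell, so its exports
# vanish when the child exits.
unset DEMO_TOP

cat > /tmp/demo.env <<'EOF'
export DEMO_TOP=/u02/install/fs1/EBSapps/appl
EOF

sh /tmp/demo.env                      # child shell: the export is lost
echo "after 'sh':     '${DEMO_TOP:-unset}'"

. /tmp/demo.env                       # dot+space: runs in the current shell
echo "after sourcing: '${DEMO_TOP:-unset}'"
```

The first echo prints “unset”; only after sourcing with the leading dot does the second echo show the path, which is exactly why running the EBS env scripts without the dot leaves your session unconfigured.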

I want to do something slightly different!

Maybe all of this just isn’t weird enough, and you would like to either:

  1. Install the Production OVM templates instead of the Vision templates (Hi John!), or
  2. Deploy the Vision templates in a two-node configuration

In either case, you should be able to find your way based on the instructions in this post. The only “magic” happening here is the rescue boot and the mkinitrd command that allow the use of the non-Xen kernel. After that, most of the steps should be more or less the same. In the first case listed above (production database templates), the only thing that changes is which database templates you download: parts V41175-V41177 instead of parts V41171-V41173. While I haven’t checked, the network and db tier configuration script is presumably the same. In the second case (two-node configuration), you won’t need to worry about changing disk labels or altering /etc/fstab, but you will have to convert the Apps root disk to a VDI file, and do the mkinitrd steps twice. Also, when allocating resources for the two-node instance, I recommend a minimum of 6GB of memory for the app tier and 2GB for the database tier. Please note that I haven’t tried either approach; these are just educated guesses based on doc review and looking at the configuration scripts. If you’re successful with either of these alternative deployments, I’d love to hear about it in the comments!

Ok, we’re done! You can now give yourself (or an ORACLENERD you care about) the gift of an EBS 12.2.2 Vision instance, just in time for the holidays! Start downloading now, though, if you want to be ready by Christmas. ;)

Good luck, and happy playing!

Troubleshooting Oracle’s Auto Service Request


I’ve spent the better part of the day troubleshooting an issue with Oracle’s Auto Service Request (ASR) and wanted to share my results in case it saves someone else some effort.

The ASR manager is designed to be a site-wide aggregation point for ASR alerts, receiving SNMP traps and forwarding them over https to transport.oracle.com. But if you’re using the standard SNMP trap port 162 on a Linux system, you may find that such traps are never forwarded to Oracle.

I was testing this by creating test traps through IPMI:

# ipmitool sunoem cli "set /SP/alertmgmt/rules/1 testrule=true"
 Connected. Use ^D to exit.
 -> set /SP/alertmgmt/rules/1 testrule=true
 Set 'testrule' to 'true'

 -> Session closed
Disconnected

This trap should be passed on to Oracle and result in an e-mail noting that a test service request had been created. But in my case, no e-mail ever arrived.

/var/log/messages, however, did show that a test trap was generated:

Dec 19 16:12:23 asrmgr01 snmptrapd[14527]: 2013-12-19 16:12:23 testdb01.example.com [UDP: [43.218.200.118]:32957]: DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (51161892) 5 days, 22:06:58.92  SNMPv2-MIB::snmpTrapOID.0 = OID: SNMPv2-SMI::enterprises.42.2.175.103.2.0.63    SNMPv2-SMI::enterprises.42.2.175.103.2.1.1.0 = STRING: "Oracle Database Appliance X3-2 1234ABC12B"      SNMPv2-SMI::enterprises.42.2.175.103.2.1.14.0 = STRING: "1234ABC12B"    SNMPv2-SMI::enterprises.42.2.175.103.2.1.15.0 = STRING: "SUN FIRE X4170 M3"     SNMPv2-SMI::enterprises.42.2.175.103.2.1.20.0 = STRING: "This is a test trap"

But none of the ASR manager logs in /var/opt/SUNWsasm/log showed any indication of activity.

After a lot of digging, including copious logfile reading, straces, and tcpdumps, I found that the ASR manager process wasn’t even listening for SNMP traps:

[root@asrmgr01 log]# lsof -p `pidof java` | grep UDP
java    31318 root   93u  IPv6           23334618      0t0      UDP *:41178

Searching for the process holding the SNMP trap port 162 (service name “snmptrap”):

[root@asrmgr01 log]# lsof | grep UDP | grep ":snmptrap"
snmptrapd 28163 root    8u  IPv4           23357406      0t0      UDP *:snmptrap

It’s a different process entirely: snmptrapd.

[root@asrmgr01 log]# ps -ef | grep snmptrapd | grep -v grep
root      4986     1  0 Dec15 ?        00:00:04 /usr/sbin/snmptrapd -Lsd -p /var/run/snmptrapd.pid

Decoding the command-line arguments: -Lsd sends “L”og messages to “s”yslog at “d”aemon priority. These were the messages I had seen in /var/log/messages.

And a little more digging in the ASR manager logfile /var/opt/SUNWsasm/log/sasm.log turned up a telling message:

2013-12-19_16:00:51  command executed:  sasm start-instance
Starting Oracle Automated Service Manager...
Cannot bind to port : 162

Unfortunately, sasm continued to start, reporting nothing on stdout. It would have been much easier if it had simply exited on a fatal error like this.
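Since sasm doesn’t abort on the bind failure, a quick pre-flight check before (re)starting it can save this kind of digging. A minimal sketch (assumes lsof is installed; run as root so sockets owned by other users are visible):

```shell
# Report who currently holds the SNMP trap port (UDP 162), if anyone.
holder=$(lsof -nP -i UDP:162 2>/dev/null | awk 'NR>1 {print $1; exit}')
if [ -n "$holder" ]; then
  echo "UDP 162 is held by: $holder -- sasm will fail to bind"
else
  echo "UDP 162 appears free"
fi
```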

Anyway, the fix was quite simple: disable snmptrapd on the ASR manager host and restart the ASR manager:

chkconfig snmptrapd off
service snmptrapd stop
service sasm restart

And then my test traps started succeeding in generating e-mail alerts.

Do you have enough Redo?


The question of whether a database has enough redo log space available is quite common. The documentation suggests using FAST_START_MTTR_TARGET and V$INSTANCE_RECOVERY.OPTIMAL_LOGFILE_SIZE to identify “the optimal” redo log size based on the target recovery time. I’ve never used that approach and can’t comment on whether it’s reasonable. The easiest way to identify whether you need to increase the redo log size is to check for ‘log file switch (checkpoint incomplete)’ waits and, depending on the total wait time, decide how much more redo you need. The obvious source of such information is Statspack or AWR, so I have written a query which pulls the necessary information and calculates the amount of redo to add based on the total wait time.
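For reference, the documented approach mentioned above boils down to a one-line query. I haven’t validated it myself, and note that the column is only populated when FAST_START_MTTR_TARGET is set:

```sql
-- Documented sizing approach (untested here): the view reports, in MB,
-- a redo log size considered optimal for the current MTTR target.
-- OPTIMAL_LOGFILE_SIZE is NULL unless FAST_START_MTTR_TARGET is set.
select optimal_logfile_size from v$instance_recovery;
```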

Here is the query and an example of its output from a 10.2.0.4 database:

set pagesize 50000 linesize 300

col instance_number             format 99                   head 'In|st'
col tim                                                     head 'Period end'
col cpu_sec                     format 999,999,999.9        head 'CPU used|sec'
col phy_reads                   format 999,999,999          head 'Physical|reads'
col phy_writes                  format 999,999,999          head 'Physical|writes'
col cr_served                   format 999,999,999          head 'CR blocks|served'
col current_served              format 999,999,999          head 'CUR blocks|served'
col redo_mb                     format 999,999,999.9        head 'Redo, MB'
col processes                   format 999,999              head 'Proc|esses'
col avg_df_seq                  format 9,999.9              head 'Avg 1|read'
col avg_df_scat                 format 9,999.9              head 'Avg N|read'
col redo_diff_to_md_pct         format 999,999              head 'Redo Diff|to median, %'
col avg_lfpw                    format 999.99               head 'Avg|LFPW'
col avg_log_sync                format 9,999.99             head 'Avg Log|Sync, ms'
col log_ckpt_sec                format 999,999              head 'CKPT|waits, s'
col redo_needed                 format 999,999              head 'Redo to|Add, MB'

compute max of cpu_sec          on instance_number
compute max of phy_reads        on instance_number
compute max of phy_writes       on instance_number
compute max of cr_served        on instance_number
compute max of current_served   on instance_number
compute max of phy_writes       on instance_number
compute max of redo_needed      on instance_number
compute max of log_ckpt_sec     on instance_number
compute max of avg_log_sync     on instance_number
compute max of avg_lfpw         on instance_number
compute max of redo_mb          on instance_number
compute max of processes        on instance_number
compute max of avg_df_seq       on instance_number
compute max of avg_df_scat      on instance_number

break on instance_number skip page

with t_interval as
(
 select /*+ inline */ sysdate-30 begin, sysdate as end from dual
)
select
  stats.dbid                                                                 dbid,
  stats.instance_number                                                      instance_number,
  to_char(stats.snap_time, 'YYYYMMDD HH24MI')                                tim,
  stats.cpu_used / 100                                                       cpu_sec,
  stats.phy_reads                                                            phy_reads,
  stats.phy_writes                                                           phy_writes,
  stats.cr_served                                                            cr_served,
  stats.current_served                                                       current_served,
  stats.redo_size / 1024 / 1024                                              redo_mb,
  procs.current_utilization                                                     processes,
--
  waits.df_seq_micro / 1000 / nullif(waits.df_seq_waits,0)                   avg_df_seq,
  waits.df_scat_micro / 1000 / nullif(waits.df_scat_waits,0)                 avg_df_scat,
  (stats.redo_size - stats.md_redo_size) * 100 / stats.md_redo_size          redo_diff_to_md_pct,
  stats.redo_write_time*10/stats.redo_writes                                 avg_lfpw,
  waits.log_sync_micro/nullif(waits.log_sync_waits, 0) / 1000                avg_log_sync,
  waits.log_ckpt_micro/1e6                                                   log_ckpt_sec,
  ( stats.redo_size /
     ( nullif(waits.snap_interval, 0) * 86400 ) ) *
   ( waits.log_ckpt_micro/1e6 ) / 1024 / 1024                                redo_needed,
  stats.is_restart
from
  (
   select
     snap_id,
     snap_time,
     snap_interval,
     instance_number,
     dbid,
     log_sync_micro,
     log_sync_waits,
     log_ckpt_micro,
     log_ckpt_waits,
     df_seq_micro,
     df_seq_waits,
     df_scat_micro,
     df_scat_waits,
     direct_micro,
     direct_waits,
     median(log_sync_micro/nullif(log_sync_waits, 0)) over (partition by dbid, instance_number) md_log_sync_micro
   from
   (
      select
        snap_id,
        snap_time,
        instance_number,
        dbid,
        max(snap_interval) snap_interval,
        max(decode(event, 'log file sync',                            wait_micro))    log_sync_micro,
        max(decode(event, 'log file sync',                            total_waits))   log_sync_waits,
        max(decode(event, 'log file switch (checkpoint incomplete)',  wait_micro))    log_ckpt_micro,
        max(decode(event, 'log file switch (checkpoint incomplete)',  total_waits))   log_ckpt_waits,
        max(decode(event, 'db file sequential read',                  wait_micro))    df_seq_micro,
        max(decode(event, 'db file sequential read',                  total_waits))   df_seq_waits,
        max(decode(event, 'db file scattered read',                   wait_micro))    df_scat_micro,
        max(decode(event, 'db file scattered read',                   total_waits))   df_scat_waits,
        max(decode(event, 'direct path read',                         wait_micro))    direct_micro,
        max(decode(event, 'direct path read',                         total_waits))   direct_waits
      from
      (
        select
          e.snap_id,
          e.instance_number,
          e.dbid,
          sn.snap_time,
          snap_time - lag(snap_time) over (partition by e.dbid, e.instance_number, e.event order by sn.snap_time) snap_interval,
          sn.startup_time,
          e.event,
          case when (sn.snap_time >= sn.startup_time and lag(sn.snap_time) over (partition by e.dbid, e.instance_number, e.event order by sn.snap_time) < sn.startup_time)
            then e.time_waited_micro
            else e.time_waited_micro - lag(e.time_waited_micro) over (partition by e.dbid, e.instance_number, e.event order by sn.snap_time)
          end wait_micro,
          case when (sn.snap_time >= sn.startup_time and lag(sn.snap_time) over (partition by e.dbid, e.instance_number, e.event order by sn.snap_time) < sn.startup_time)
            then e.total_waits
            else e.total_waits - lag(e.total_waits) over (partition by e.dbid, e.instance_number, e.event order by sn.snap_time)
          end total_waits
        from
          stats$system_event e,
          stats$snapshot     sn,
          t_interval         t
        where
          sn.snap_id = e.snap_id and
          sn.dbid = e.dbid and
          sn.instance_number = e.instance_number and
          sn.snap_time between t.begin and t.end and
          e.event in (
            'log file sync',
            'log file switch (checkpoint incomplete)',
            'db file sequential read',
            'db file scattered read',
            'direct path read'
            )
      )
      group by dbid, instance_number, snap_time, snap_id
    )
  ) waits,
  (
    select
      snap_id,
      snap_time,
      instance_number,
      dbid,
      redo_size,
      redo_write_time,
      redo_writes,
      is_restart,
      cpu_used,
      phy_reads,
      phy_reads_cache,
      phy_writes,
      phy_writes_cache,
      cr_served,
      current_served,
      median(redo_size) over (partition by dbid, instance_number) md_redo_size
    from
    (
      select
        snap_id,
        snap_time,
        instance_number,
        dbid,
        max(is_restart) is_restart,
        max(decode(name, 'redo size',                       stat_diff)) redo_size,
        max(decode(name, 'redo write time',                 stat_diff)) redo_write_time,
        max(decode(name, 'redo writes',                     stat_diff)) redo_writes,
        max(decode(name, 'CPU used by this session',        stat_diff)) cpu_used,
        max(decode(name, 'physical read total IO requests', stat_diff)) phy_reads,
        max(decode(name, 'physical reads cache',            stat_diff)) phy_reads_cache,
        max(decode(name, 'physical write total IO requests',stat_diff)) phy_writes,
        max(decode(name, 'physical writes from cache',      stat_diff)) phy_writes_cache,
        max(decode(name, 'gc cr blocks served',             stat_diff)) cr_served,
        max(decode(name, 'gc current blocks served',        stat_diff)) current_served
      from
      (
        select
          stats.snap_id,
          stats.instance_number,
          stats.dbid,
          sn.snap_time,
          sn.startup_time,
          n.name,
          case when (sn.snap_time >= sn.startup_time and lag(sn.snap_time) over (partition by stats.dbid, stats.instance_number, stats.statistic# order by sn.snap_time) < sn.startup_time)
            then stats.value
            else stats.value - lag(stats.value) over (partition by stats.dbid, stats.instance_number, stats.statistic# order by stats.snap_id)
          end stat_diff,
          case when (sn.snap_time >= sn.startup_time and lag(sn.snap_time) over (partition by stats.dbid, stats.instance_number, stats.statistic# order by sn.snap_time) < sn.startup_time)
            then 'Yes'
          end is_restart
        from
          stats$sysstat      stats,
          stats$snapshot     sn,
          v$statname         n,
          t_interval         t
        where
          sn.snap_id = stats.snap_id and
          sn.dbid = stats.dbid and
          sn.instance_number = stats.instance_number and
          sn.snap_time between t.begin and t.end and
          stats.statistic# = n.statistic# and
          n.name in (
            'redo size',
            'redo write time',
            'redo writes',
            'CPU used by this session',
            'physical read total IO requests',
            'physical reads cache',
            'physical write total IO requests',
            'physical writes from cache',
            'gc cr blocks served',
            'gc current blocks served'
          )
      )
      group by dbid, instance_number, snap_time, snap_id
    )
  ) stats,
  (
    select
      stats.snap_id,
      stats.instance_number,
      stats.dbid,
      stats.resource_name,
      stats.current_utilization
    from
      stats$resource_limit stats,
      stats$snapshot       sn,
      t_interval           t
    where
      sn.snap_id = stats.snap_id and
      sn.dbid = stats.dbid and
      sn.instance_number = stats.instance_number and
      sn.snap_time between t.begin and t.end and
      stats.resource_name = 'processes'
  ) procs
where
  waits.dbid = stats.dbid and
  waits.instance_number = stats.instance_number and
  waits.snap_id = stats.snap_id and
  waits.dbid = procs.dbid and
  waits.instance_number = procs.instance_number and
  waits.snap_id = procs.snap_id
order by
 stats.dbid, stats.instance_number, stats.snap_time
;

 

                                Redo Diff     Avg   Avg Log     CKPT  Redo to
Period end          Redo, MB to median, %    LFPW  Sync, ms waits, s  Add, MB IS_
------------- -------------- ------------ ------- --------- -------- -------- ---
20131213 0800
20131213 0900        3,659.1           21    2.77      8.73       63       64
20131213 1000        4,964.8           64   15.06     15.45       90      125
20131213 1100        3,533.0           17   26.42      9.43      114      112
20131213 1200          206.5          -93   10.86      3.83        0        0
20131213 1300        1,363.5          -55   34.17     10.65       26       10
20131213 1400          133.4          -96    9.17      4.07        0        0
20131213 1500          123.7          -96    8.52      4.62        0        0
20131213 1600        2,037.8          -33   36.22      8.73       40       23
20131213 1700           13.6         -100    3.82      8.43        0        0
20131213 1800           60.7          -98   91.51    251.81                   Yes
20131213 1900        4,177.9           38    6.67     53.66       52       30
20131213 2000        3,145.3            4   39.61     12.79       41       35
20131213 2100        4,606.2           52   19.65      7.90       63       80
20131213 2200        3,303.8            9   77.90    115.53       73       67
20131213 2300        2,673.2          -12   57.37     85.43      123       91
20131214 0000        1,213.9          -60   56.35    155.38       83       28
20131214 0100        3,569.0           18  110.49    304.81      121      120
20131214 0200        5,587.7           84   17.92     26.62      123      191
20131214 0300        7,630.4          152   30.87     28.08      138      293
20131214 0400        5,706.7           88   37.36     39.41      179      284
20131214 0500        7,688.8          154   42.66     26.86      196      419
20131214 0600        1,377.6          -55   29.46     19.79       29       11
20131214 0700        2,176.3          -28   26.62     24.13       43       26
20131214 0800        2,179.6          -28   25.23     22.50       43       26
20131214 0900        3,892.0           28    2.54     14.32       93      101
20131214 1000        5,354.2           77   32.67     20.26      127      189
20131214 1100        4,232.6           40   49.24     86.27      108      127
20131214 1200        1,778.2          -41   25.76     10.03       29       14
20131214 1300           23.0          -99    4.07      6.14        0        0
20131214 1400        1,378.1          -55   32.63     14.99       23        9
20131214 1500        2,121.7          -30   34.60    285.24       38       22
20131214 1600          701.6          -77   39.29      9.90        7        1
20131214 1700        1,557.5          -49   43.51     52.96       24       10
20131214 1800          455.3          -85   35.41     57.68        1        0
20131214 1900        4,175.1           38    7.22     22.67       71       82
20131214 2000        3,116.7            3   45.78     20.85       72       63
20131214 2100        4,578.1           51   27.92      7.34       68       86
20131214 2200        3,751.4           24   22.54     25.00       73       76
20131214 2300        1,962.1          -35   52.08     46.90       33       18
20131215 0000        3,161.0            4   32.76     67.61       60       53
20131215 0100        6,516.9          115   46.88     52.77      147      267
20131215 0200        3,147.0            4   17.63     35.01      112       98
20131215 0300        5,084.3           68   50.88     95.94      164      232
20131215 0400        5,727.6           89   56.45     49.84      143      228
20131215 0500        7,783.3          157   59.43     53.49      198      428
20131215 0600        1,828.4          -40   68.17    136.70       46       23
20131215 0700          840.2          -72   39.24      8.99       20        5
20131215 0800        2,202.9          -27   28.78     28.14       42       26
20131215 0900        3,704.2           22    2.53     17.06       80       82
20131215 1000        4,847.2           60   32.70     57.14        6        8
20131215 1100        2,987.0           -1   29.51      9.84        0        0
20131215 1200          203.2          -93   16.36     10.84        0        0
20131215 1300           42.3          -99    1.12      2.00        0        0
20131215 1400          596.7          -80    9.57     12.94        0        0
20131215 1500          186.8          -94   10.67     11.89                   Yes
20131215 1600          942.0          -69    7.57     17.27
20131215 1700           14.5         -100    1.82      3.56
20131215 1800          462.6          -85    9.31      8.47
20131215 1900        3,762.0           24    4.79      8.23
20131215 2000        3,171.9            5   34.83     16.26
20131215 2100        4,525.7           49   28.30      6.52
20131215 2200        3,623.0           20   18.74     21.89        1        0
20131215 2300        2,625.7          -13   42.53     12.18        0        0
20131216 0000        2,256.5          -26   39.12     41.47        0        0
20131216 0100        6,061.9          100   93.86    129.36       11       19
20131216 0200        6,297.2          108   19.90     30.07        0        0
20131216 0300        5,079.6           68   23.35     59.40        0        0
20131216 0400        5,555.2           83   38.73     34.99        0        0
20131216 0500        7,560.7          150   46.22     61.08        0        0
20131216 0600        1,364.1          -55   35.62     34.54        0        0
20131216 0700        2,212.2          -27   35.83     16.32        0        0
20131216 0800        2,117.8          -30   25.73     12.24        0        0
20131216 0900        3,684.7           22   40.42     28.94        0        0
20131216 1000        6,502.9          115    5.65     29.35        0        1
20131216 1100        4,372.9           44   29.11     14.23        0        0
20131216 1200        1,742.8          -42   25.10     12.52        0        0
20131216 1300           31.1          -99    5.92     12.56        0        0
20131216 1400        1,486.6          -51   44.12     11.59        0        0
20131216 1500        6,108.9          102   36.05     38.18        0        0
20131216 1600          686.5          -77   27.43      7.14        0        0
20131216 1700        1,557.2          -49   22.99      7.41        0        0
20131216 1800          464.3          -85   26.43     25.12        0        0
20131216 1900        3,974.1           31    4.59      9.26        6        6
20131216 2000        3,102.7            2   30.15     28.33        0        0
20131216 2100        5,400.9           78   20.70      8.15        0        0
20131216 2200        3,130.8            3   11.57     15.79        0        0
20131216 2300        2,203.7          -27   48.37     63.28        0        0
20131217 0000        3,098.9            2   29.81     52.57        0        0
20131217 0100        3,909.3           29   71.10    169.28        0        0
20131217 0200        6,227.8          106   19.69     30.72        0        0
20131217 0300        7,325.3          142   27.64     45.12        0        0
20131217 0400        6,068.9          100   37.96     37.70        0        0
20131217 0500        7,231.6          139   35.11     26.07        0        0
20131217 0600        1,263.5          -58   24.18     13.22        0        0
20131217 0700        2,019.4          -33   25.45      5.84        0        0
20131217 0800        2,130.4          -30   22.27     12.88        0        0
20131217 0900        4,018.1           33    5.33     58.09        0        0
20131217 1000        5,193.5           71    6.36     63.90        2        3
20131217 1100        3,217.3            6   25.76     11.66        0        0
20131217 1200        1,779.8          -41   21.45      8.97        0        0
20131217 1300          734.1          -76   20.19      5.34        0        0
20131217 1400        2,697.0          -11   24.35      8.52        0        0
20131217 1500          122.0          -96   36.35     18.74        0        0
20131217 1600          817.4          -73   29.34      7.29        0        0
20131217 1700        1,616.1          -47   22.52      8.10        0        0
20131217 1800          466.8          -85   43.67     73.78        0        0
20131217 1900        3,915.3           29    4.58     10.88        0        0
20131217 2000        3,082.5            2   35.76     21.84        0        0
20131217 2100        5,320.9           76   20.40      8.94        0        0
20131217 2200        3,020.1           -0   52.09     44.66        1        1
20131217 2300        2,034.3          -33   43.56     62.13        0        0
20131218 0000        2,723.5          -10   30.35     51.05        0        0
20131218 0100        4,740.9           56   61.93    124.11        0        0
20131218 0200        7,426.3          145   19.91     32.82        0        0
20131218 0300        4,703.2           55   21.78     28.80        0        0
20131218 0400        5,658.4           87   37.75     20.70        0        0
20131218 0500        7,858.4          159   42.53     29.13        0        0
20131218 0600        1,304.4          -57   30.40      9.91        0        0
20131218 0700        2,143.6          -29   28.20     13.88        1        1
20131218 0800        2,078.0          -31   24.94     18.97        0        0
20131218 0900        3,833.6           27    4.17     22.95        2        2
20131218 1000        5,933.3           96   10.73     19.28        0        0
20131218 1100        2,533.8          -16   23.92     11.09        0        0
20131218 1200        1,775.0          -41   21.39      9.89        0        0
20131218 1300           31.5          -99    3.95      4.02        0        0
20131218 1400        1,916.7          -37   28.73     15.65        0        0
20131218 1500        1,794.5          -41   34.64    215.95        0        0
20131218 1600          876.2          -71   27.94      4.64        0        0
20131218 1700        1,544.7          -49   21.90      6.72        0        0
20131218 1800          465.2          -85   26.04     25.71        0        0
20131218 1900        3,662.1           21    4.19      7.11        0        0
20131218 2000        3,085.3            2   34.51     16.00        0        0
20131218 2100        5,286.9           75   20.71      7.70        0        0
20131218 2200        3,072.3            1   15.16     11.18        0        0
20131218 2300        2,489.8          -18   22.96     17.53        0        0
20131219 0000        2,249.1          -26   31.52     77.49        2        1
20131219 0100        3,466.3           14  169.29    321.38        7        7
20131219 0200        4,365.9           44   11.78     25.60        0        0
20131219 0300        9,837.6          225   32.10     42.25       12       33
20131219 0400        7,251.2          139   42.75     27.18        0        0
20131219 0500        8,428.8          178   40.70     22.69        0        0
20131219 0600        1,304.1          -57   25.98      8.51        0        0
20131219 0700        2,026.6          -33   28.31     14.68        0        0
20131219 0800        2,413.5          -20    8.62      1.15        0        0
20131219 0900        4,025.8           33    3.17     19.64        0        0
...

This query is large and can report more than just redo-related statistics; you only need to uncomment the relevant columns. The columns relevant to this post are “CKPT waits, s” and “Redo to Add, MB”. The first is the total time wasted on ‘log file switch (checkpoint incomplete)’ waits, and the second is a projection of the additional amount of redo that might have been enough to reduce those waits to a minimum. The projection takes the average redo rate over the snapshot interval and multiplies it by the total time waited. Of course this is a rough approximation: if you had 10 sessions waiting for the checkpoint to complete, the total time waited by sessions is obviously more than the wall-clock time, so multiplying it by an average redo rate is plain wrong; but it is nearly correct for a database that has only one session waiting for a checkpoint to complete at any point in time. Nevertheless, I think this approximation is a good starting point.
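As a sanity check, the projection can be reproduced by hand. Using the numbers from the first full row of the sample output above (3,659.1 MB of redo over a one-hour snapshot with 63 seconds of checkpoint waits):

```shell
# Recompute "Redo to Add, MB" for one snapshot: average redo rate
# (MB/sec over the snapshot interval) times the seconds spent waiting
# on 'log file switch (checkpoint incomplete)'.
awk 'BEGIN {
  redo_mb     = 3659.1   # redo generated during the snapshot
  interval_s  = 3600     # snapshot interval, in seconds
  ckpt_wait_s = 63       # total checkpoint-incomplete wait time
  printf "Redo to add: %.0f MB\n", (redo_mb / interval_s) * ckpt_wait_s
}'
# -> Redo to add: 64 MB, matching the report column
```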
In this particular case, the database had 4 redo log groups of 60MB each, and on December 15th they were changed to 4 groups of 150MB each. As you can see, this reduced the ‘checkpoint incomplete’ waits significantly, although not completely. If you look at the redo rate per hour, it is clear that after the redo log size change the rate of database changes also increased, so the remaining waits are partly a consequence of the database now being able to generate more changes per second.
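The resize itself is the usual add-then-drop dance; here is a sketch with hypothetical group numbers and file paths (an old group can only be dropped once V$LOG shows it INACTIVE and it has been archived):

```sql
-- Hypothetical group numbers and paths: add the larger groups first...
alter database add logfile group 5 ('/u01/oradata/DB/redo05a.log') size 150m;
alter database add logfile group 6 ('/u01/oradata/DB/redo06a.log') size 150m;
alter database add logfile group 7 ('/u01/oradata/DB/redo07a.log') size 150m;
alter database add logfile group 8 ('/u01/oradata/DB/redo08a.log') size 150m;
-- ...then drop each old 60MB group once it goes INACTIVE
-- (ALTER SYSTEM SWITCH LOGFILE and CHECKPOINT help cycle past them)
alter database drop logfile group 1;
```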

And just for the sake of completeness, here is a version of the query for AWR. I haven’t tested it much, but it should be fine, as most of the AWR tables are similar to their Statspack counterparts. Enjoy!

set pagesize 50000 linesize 300

col instance_number             format 99                   head 'In|st'
col tim                                                     head 'Period end'
col cpu_sec                     format 999,999,999.9        head 'CPU used|sec'
col phy_reads                   format 999,999,999          head 'Physical|reads'
col phy_writes                  format 999,999,999          head 'Physical|writes'
col cr_served                   format 999,999,999          head 'CR blocks|served'
col current_served              format 999,999,999          head 'CUR blocks|served'
col redo_mb                     format 999,999,999.9        head 'Redo, MB'
col processes                   format 999,999              head 'Proc|esses'
col avg_df_seq                  format 9,999.9              head 'Avg 1|read'
col avg_df_scat                 format 9,999.9              head 'Avg N|read'
col redo_diff_to_md_pct         format 999,999              head 'Redo Diff|to median, %'
col avg_lfpw                    format 999.99               head 'Avg|LFPW'
col avg_log_sync                format 9,999.99             head 'Avg Log|Sync, ms'
col log_ckpt_sec                format 999,999              head 'CKPT|waits, s'
col redo_needed                 format 999,999              head 'Redo to|Add, MB'

compute max of cpu_sec          on instance_number
compute max of phy_reads        on instance_number
compute max of phy_writes       on instance_number
compute max of cr_served        on instance_number
compute max of current_served   on instance_number
compute max of phy_writes       on instance_number
compute max of redo_needed      on instance_number
compute max of log_ckpt_sec     on instance_number
compute max of avg_log_sync     on instance_number
compute max of avg_lfpw         on instance_number
compute max of redo_mb          on instance_number
compute max of processes        on instance_number
compute max of avg_df_seq       on instance_number
compute max of avg_df_scat      on instance_number

break on instance_number skip page

with t_interval as
(
 select /*+ inline */ sysdate-30 begin, sysdate as end from dual
)
select
  stats.dbid                                                                dbid,
  stats.instance_number                                                     instance_number,
  to_char(stats.begin_interval_time, 'YYYYMMDD HH24MI')                     tim,
  stats.cpu_used / 100                                                      cpu_sec,
  stats.phy_reads                                                           phy_reads,
  stats.phy_writes                                                          phy_writes,
  stats.cr_served                                                           cr_served,
  stats.current_served                                                      current_served,
  stats.redo_size / 1024 / 1024                                             redo_mb,
  procs.current_utilization                                                 processes,
--
  waits.df_seq_micro / 1000 / nullif(waits.df_seq_waits,0)                  avg_df_seq,
  waits.df_scat_micro / 1000 / nullif(waits.df_scat_waits,0)                avg_df_scat,
  (stats.redo_size - stats.md_redo_size) * 100 / stats.md_redo_size         redo_diff_to_md_pct,
  stats.redo_write_time*10/stats.redo_writes                                avg_lfpw,
  waits.log_sync_micro/nullif(waits.log_sync_waits, 0) / 1000               avg_log_sync,
  waits.log_ckpt_micro/1e6                                                  log_ckpt_sec,
  ( stats.redo_size /
     ( waits.snap_interval * 86400 ) ) *
   ( waits.log_ckpt_micro/1e6 ) / 1024 / 1024                               redo_needed,
  stats.is_restart
from
  (
   select
     snap_id,
     begin_interval_time,
     snap_interval,
     instance_number,
     dbid,
     log_sync_micro,
     log_sync_waits,
     log_ckpt_micro,
     log_ckpt_waits,
     df_seq_micro,
     df_seq_waits,
     df_scat_micro,
     df_scat_waits,
     direct_micro,
     direct_waits,
     median(log_sync_micro/nullif(log_sync_waits, 0)) over (partition by dbid, instance_number) md_log_sync_micro
   from
   (
      select
        snap_id,
        begin_interval_time,
        instance_number,
        dbid,
        max(snap_interval) snap_interval,
        max(decode(event_name, 'log file sync',                            wait_micro))    log_sync_micro,
        max(decode(event_name, 'log file sync',                            total_waits))   log_sync_waits,
        max(decode(event_name, 'log file switch (checkpoint incomplete)',  wait_micro))    log_ckpt_micro,
        max(decode(event_name, 'log file switch (checkpoint incomplete)',  total_waits))   log_ckpt_waits,
        max(decode(event_name, 'db file sequential read',                  wait_micro))    df_seq_micro,
        max(decode(event_name, 'db file sequential read',                  total_waits))   df_seq_waits,
        max(decode(event_name, 'db file scattered read',                   wait_micro))    df_scat_micro,
        max(decode(event_name, 'db file scattered read',                   total_waits))   df_scat_waits,
        max(decode(event_name, 'direct path read',                         wait_micro))    direct_micro,
        max(decode(event_name, 'direct path read',                         total_waits))   direct_waits
      from
      (
        select
          e.snap_id,
          e.instance_number,
          e.dbid,
          sn.begin_interval_time,
          cast(begin_interval_time as date) - cast(lag(begin_interval_time) over (partition by e.dbid, e.instance_number, e.event_name order by sn.begin_interval_time) as date) snap_interval,
          sn.startup_time,
          e.event_name,
          case when (sn.begin_interval_time >= sn.startup_time and lag(sn.begin_interval_time) over (partition by e.dbid, e.instance_number, e.event_name order by sn.begin_interval_time) < sn.startup_time)
            then e.time_waited_micro
            else e.time_waited_micro - lag(e.time_waited_micro) over (partition by e.dbid, e.instance_number, e.event_name order by sn.begin_interval_time)
          end wait_micro,
          case when (sn.begin_interval_time >= sn.startup_time and lag(sn.begin_interval_time) over (partition by e.dbid, e.instance_number, e.event_name order by sn.begin_interval_time) < sn.startup_time)
            then e.total_waits
            else e.total_waits - lag(e.total_waits) over (partition by e.dbid, e.instance_number, e.event_name order by sn.begin_interval_time)
          end total_waits
        from
          dba_hist_system_event e,
          dba_hist_snapshot     sn,
          t_interval            t
        where
          sn.snap_id = e.snap_id and
          sn.dbid = e.dbid and
          sn.instance_number = e.instance_number and
          sn.begin_interval_time between t.begin and t.end and
          e.event_name in (
            'log file sync',
            'log file switch (checkpoint incomplete)',
            'db file sequential read',
            'db file scattered read',
            'direct path read'
          )
      )
      group by dbid, instance_number, begin_interval_time, snap_id
    )
  ) waits,
  (
    select
      snap_id,
      begin_interval_time,
      instance_number,
      dbid,
      redo_size,
      redo_write_time,
      redo_writes,
      is_restart,
      cpu_used,
      phy_reads,
      phy_reads_cache,
      phy_writes,
      phy_writes_cache,
      cr_served,
      current_served,
      median(redo_size) over (partition by dbid, instance_number) md_redo_size
    from
    (
      select
        snap_id,
        begin_interval_time,
        instance_number,
        dbid,
        max(is_restart) is_restart,
        max(decode(stat_name, 'redo size',                       stat_diff)) redo_size,
        max(decode(stat_name, 'redo write time',                 stat_diff)) redo_write_time,
        max(decode(stat_name, 'redo writes',                     stat_diff)) redo_writes,
        max(decode(stat_name, 'CPU used by this session',        stat_diff)) cpu_used,
        max(decode(stat_name, 'physical read total IO requests', stat_diff)) phy_reads,
        max(decode(stat_name, 'physical reads cache',            stat_diff)) phy_reads_cache,
        max(decode(stat_name, 'physical write total IO requests',stat_diff)) phy_writes,
        max(decode(stat_name, 'physical writes from cache',      stat_diff)) phy_writes_cache,
        max(decode(stat_name, 'gc cr blocks served',             stat_diff)) cr_served,
        max(decode(stat_name, 'gc current blocks served',        stat_diff)) current_served
      from
      (
        select
          stats.snap_id,
          stats.instance_number,
          stats.dbid,
          sn.begin_interval_time,
          sn.startup_time,
          stats.stat_name,
          case when (sn.begin_interval_time >= sn.startup_time and lag(sn.begin_interval_time) over (partition by stats.dbid, stats.instance_number, stats.stat_id order by sn.begin_interval_time) < sn.startup_time)
            then stats.value
            else stats.value - lag(stats.value) over (partition by stats.dbid, stats.instance_number, stats.stat_id order by stats.snap_id)
          end stat_diff,
          case when (sn.begin_interval_time >= sn.startup_time and lag(sn.begin_interval_time) over (partition by stats.dbid, stats.instance_number, stats.stat_id order by sn.begin_interval_time) < sn.startup_time)
            then 'Yes'
          end is_restart
        from
          dba_hist_sysstat   stats,
          dba_hist_snapshot  sn,
          t_interval         t
        where
          sn.snap_id = stats.snap_id and
          sn.dbid = stats.dbid and
          sn.instance_number = stats.instance_number and
          sn.begin_interval_time between t.begin and t.end and
          stats.stat_name in (
            'redo size',
            'redo write time',
            'redo writes',
            'CPU used by this session',
            'physical read total IO requests',
            'physical reads cache',
            'physical write total IO requests',
            'physical writes from cache',
            'gc cr blocks served',
            'gc current blocks served'
          )
      )
      group by dbid, instance_number, begin_interval_time, snap_id
    )
  ) stats,
  (
    select
      stats.snap_id,
      stats.instance_number,
      stats.dbid,
      stats.resource_name,
      stats.current_utilization
    from
      dba_hist_resource_limit stats,
      dba_hist_snapshot       sn,
      t_interval              t
    where
      sn.snap_id = stats.snap_id and
      sn.dbid = stats.dbid and
      sn.instance_number = stats.instance_number and
      sn.begin_interval_time between t.begin and t.end and
      stats.resource_name = 'processes'
  ) procs
where
  waits.dbid = stats.dbid and
  waits.instance_number = stats.instance_number and
  waits.snap_id = stats.snap_id and
  waits.dbid = procs.dbid and
  waits.instance_number = procs.instance_number and
  waits.snap_id = procs.snap_id
order by
 stats.dbid, stats.instance_number, stats.begin_interval_time
;

Statistics gathering and SQL Tuning Advisor


Our monitoring software flagged a long-running job on one of our client’s databases: an Oracle auto task that had been gathering statistics for more than 3 hours. I was curious why it took so long, so I queried ASH for the most common SQL during the job run, based on the module name. The results surprised me: the top SQL carried a SQL Tuning Advisor comment.

Here is the SQL I used:

SQL> select s.sql_id, t.sql_text, s.cnt
  2  from
  3    (select *
  4     from
  5      (
  6        select sql_id, count(*) cnt
  7        from v$active_session_history
  8        where action like 'ORA$AT_OS_OPT_SY%'
  9        group by sql_id
 10        order by count(*) desc
 11      )
 12     where rownum <= 5
 13    ) s,
 14    dba_hist_sqltext t
 15  where s.sql_id = t.sql_id;

SQL_ID        SQL_TEXT                                                                                CNT
------------- -------------------------------------------------------------------------------- ----------
020t65s3ah2pq select substrb(dump(val,16,0,32),1,120) ep, cnt from (select /*+ no_expand_table        781
byug0cc5vn416 /* SQL Analyze(1) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t)          43
bkvvr4azs1n6z /* SQL Analyze(1) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t)          21
46sy4dfg3xbfn /* SQL Analyze(1) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t)        1559

So most of the queries carry a “SQL Analyze” comment right at the beginning, even though they are issued from a DBMS_STATS call, which is confusing. After some bug searching I found MOS Doc ID 1480132.1, which includes a PL/SQL stack trace from a DBMS_STATS procedure call reaching all the way up to DBMS_SQLTUNE_INTERNAL. In other words, DBMS_STATS calls into the SQL Tuning package; very odd:

SQL> select * from dba_dependencies where name = 'DBMS_STATS_INTERNAL' and referenced_name = 'DBMS_SQLTUNE_INTERNAL';

OWNER                          NAME                           TYPE               REFERENCED_OWNER       REFERENCED_NAME
------------------------------ ------------------------------ ------------------ ------------------------------ ----------------------------------
REFERENCED_TYPE    REFERENCED_LINK_NAME                                                                                                     DEPE
------------------ -------------------------------------------------------------------------------------------------------------------------------
SYS                            DBMS_STATS_INTERNAL            PACKAGE BODY       SYS                    DBMS_SQLTUNE_INTERNAL
PACKAGE                                                                                                                                     HARD

Turns out, this call has nothing to do with SQL Tuning. It is just a call to a procedure in that package which happens to produce SQL that looks like it came from the SQL Tuning Advisor. I traced a GATHER_TABLE_STATS call in a test database, first with SQL trace and then with DBMS_HPROF, and here is what the call tree looks like:

SELECT RPAD(' ', (level-1)*2, ' ') || fi.owner || '.' || fi.module AS name,
       fi.function,
       pci.subtree_elapsed_time,
       pci.function_elapsed_time,
       pci.calls
FROM   dbmshp_parent_child_info pci
       JOIN dbmshp_function_info fi ON pci.runid = fi.runid AND pci.childsymid = fi.symbolid
WHERE  pci.runid = 1
CONNECT BY PRIOR childsymid = parentsymid
  START WITH pci.parentsymid = 27;
NAME                                     FUNCTION                       SUBTREE_ELAPSED_TIME FUNCTION_ELAPSED_TIME                CALLS
---------------------------------------- ------------------------------ -------------------- --------------------- --------------------
...
SYS.DBMS_STATS_INTERNAL                  GATHER_SQL_STATS                           21131962                 13023                    1
  SYS.DBMS_ADVISOR                       __pkg_init                                       88                    88                    1
  SYS.DBMS_SQLTUNE_INTERNAL              GATHER_SQL_STATS                           21118776                  9440                    1
    SYS.DBMS_SQLTUNE_INTERNAL            I_PROCESS_SQL                              21107094              21104225                    1
      SYS.DBMS_LOB                       GETLENGTH                                        37                    37                    1
      SYS.DBMS_LOB                       INSTR                                            42                    42                    1
      SYS.DBMS_LOB                       __pkg_init                                       15                    15                    1
      SYS.DBMS_SQLTUNE_INTERNAL          I_VALIDATE_PROCESS_ACTION                        74                    39                    1
        SYS.DBMS_UTILITY                 COMMA_TO_TABLE                                   35                    35                    1
      SYS.DBMS_SQLTUNE_UTIL0             SQLTEXT_TO_SIGNATURE                            532                   532                    1
      SYS.DBMS_SQLTUNE_UTIL0             SQLTEXT_TO_SQLID                                351                   351                    1
      SYS.XMLTYPE                        XMLTYPE                                        1818                  1818                    1
    SYS.DBMS_SQLTUNE_UTIL0               SQLTEXT_TO_SQLID                                528                   528                    1
    SYS.DBMS_UTILITY                     COMMA_TO_TABLE                                   88                    88                    1
    SYS.DBMS_UTILITY                     __pkg_init                                       10                    10                    1
    SYS.SQLSET_ROW                       SQLSET_ROW                                       33                    33                    1
    SYS.XMLTYPE                          XMLTYPE                                        1583                  1583                    1
  SYS.DBMS_STATS_INTERNAL                DUMP_PQ_SESSTAT                                  73                    73                    1
  SYS.DBMS_STATS_INTERNAL                DUMP_QUERY                                        2                     2                    1
...
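The DBMS_HPROF run that produced the report above isn’t shown in the post. For reference, a sketch of such a run might look like the following; the directory name PROF_DIR and the trace file name are assumptions, and the dbmshp_* tables must first be created with $ORACLE_HOME/rdbms/admin/dbmshptab.sql:

```sql
-- Profile a DBMS_STATS call with the PL/SQL hierarchical profiler.
-- Assumes: directory object PROF_DIR exists and the dbmshp_* tables
-- have been created via $ORACLE_HOME/rdbms/admin/dbmshptab.sql.
begin
  dbms_hprof.start_profiling(location => 'PROF_DIR', filename => 'gts.trc');
  dbms_stats.gather_table_stats(ownname => 'TIM', tabname => 'T1');
  dbms_hprof.stop_profiling;
end;
/

-- Load the raw profile into the dbmshp_* tables; the returned run id
-- is what the CONNECT BY report filters on (pci.runid).
variable runid number
exec :runid := dbms_hprof.analyze(location => 'PROF_DIR', filename => 'gts.trc')
print runid
```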

So there is a procedure, DBMS_SQLTUNE_INTERNAL.GATHER_SQL_STATS, which DBMS_STATS_INTERNAL calls, and which actually runs SQL like this:

/* SQL Analyze(0) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pad  */to_char(count("ID")),to_char(substrb(dump(min("ID"),16,0,32),1,120)),to_char(substrb(dump(max("ID"),16,0,32),1,120)),to_char(count("X")),to_char(substrb(dump(min("X"),16,0,32),1,120)),to_char(substrb(dump(max("X"),16,0,32),1,120)),to_char(count("Y")),to_char(substrb(dump(min("Y"),16,0,32),1,120)),to_char(substrb(dump(max("Y"),16,0,32),1,120)),to_char(count("PAD")),to_char(substrb(dump(min("PAD"),16,0,32),1,120)),to_char(substrb(dump(max("PAD"),16,0,32),1,120)) from "TIM"."T1" t  /* NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL*/

This is basically the approximate NDV calculation. So there is nothing to be afraid of; it is just how the code is organized: DBMS_STATS uses the API of the SQL Tuning framework when you use DBMS_STATS.AUTO_SAMPLE_SIZE as the ESTIMATE_PERCENT (the default and recommended value in 11g and later).
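For completeness, this whole path is triggered by an ordinary stats-gathering call; a minimal sketch, with the schema and table names taken from the traced statement above, would be:

```sql
-- With estimate_percent left at AUTO_SAMPLE_SIZE (the 11g+ default),
-- DBMS_STATS takes the approximate-NDV path through DBMS_SQLTUNE_INTERNAL.
begin
  dbms_stats.gather_table_stats(
    ownname          => 'TIM',
    tabname          => 'T1',
    estimate_percent => dbms_stats.auto_sample_size);
end;
/
```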

Disabling Triggers in Oracle 11.2.0.4


In March 2012, I put together a blog post entitled Disabling Oracle triggers on a per-session basis, outlining a way to suspend trigger execution for the current session through a PL/SQL call. A commenter, Bryan, reported that he couldn’t get it working in 11.2.0.4:

Unfortunately Oracle seems to have disabled this use in 11.2.0.4, and most likely 12.1 as well. Boo-Hiss! This is needed functionality for DBAs!

A new parameter: enable_goldengate_replication

I tried this on an Oracle 11.2.0.4 system, and I indeed got an error:

SQL> exec sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true);
BEGIN sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true); END;

*
ERROR at line 1:
ORA-26947: Oracle GoldenGate replication is not enabled.
ORA-06512: at "SYS.DBMS_XSTREAM_GG_INTERNAL", line 46
ORA-06512: at "SYS.DBMS_XSTREAM_GG", line 13
ORA-06512: at line 1

A quick look at oerr gives a path forward, assuming you do indeed have a GoldenGate license:

[oracle@ora11gr2b ~]$ oerr ora 26947
26947, 00000, "Oracle GoldenGate replication is not enabled."
// *Cause: The 'enable_goldengate_replication' parameter was not set to 'true'.
// *Action: Set the 'enable_goldengate_replication' parameter to 'true'
//           and retry the operation.
//          Oracle GoldenGate license is needed to use this parameter.

The Oracle reference gives a bit more info:

ENABLE_GOLDENGATE_REPLICATION controls services provided by the RDBMS for Oracle GoldenGate (both capture and apply services). Set this to true to enable RDBMS services used by Oracle GoldenGate.

The RDBMS services controlled by this parameter also include (but are not limited to):

Service to suppress triggers used by GoldenGate Replicat

As do the GoldenGate 12.1.2 docs:

The database services required to support Oracle GoldenGate capture and apply must be enabled explicitly for an Oracle 11.2.0.4 database. This is required for all modes of Extract and Replicat.

To enable Oracle GoldenGate, set the following database initialization parameter. All instances in Oracle RAC must have the same setting.

ENABLE_GOLDENGATE_REPLICATION=true

So here goes nothing:

SQL> alter system set enable_goldengate_replication=true;

System altered.

SQL> exec sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true);
BEGIN sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true); END;

*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_XSTREAM_GG_INTERNAL", line 46
ORA-06512: at "SYS.DBMS_XSTREAM_GG", line 13
ORA-06512: at line 1

Another error: missing privileges. I checked and double-checked that the required GoldenGate privileges were indeed assigned.

Tracing and permission checks

It’s time to run a 10046 trace (SQL trace) to see what’s really going on.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> exec sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true);
BEGIN sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true); END;

*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_XSTREAM_GG_INTERNAL", line 46
ORA-06512: at "SYS.DBMS_XSTREAM_GG", line 13
ORA-06512: at line 1

And the tracefile does show some interesting information. Here are a few of the more interesting snippets:

PARSING IN CURSOR #140324121137184 len=76 dep=0 uid=91 oct=47 lid=91 tim=1388531465245781 hv=1323338123 ad='6c1f63a0' sqlid='gvq73797f12cb'
BEGIN sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true); END;
END OF STMT
...
PARSING IN CURSOR #140324121064984 len=187 dep=1 uid=0 oct=3 lid=0 tim=1388531465246387 hv=2028900049 ad='6c128db8' sqlid='aa9h2ajwfx3qj'
SELECT COUNT(*) FROM ( SELECT GP.USERNAME FROM DBA_GOLDENGATE_PRIVILEGES GP WHERE GP.USERNAME = :B1 UNION ALL SELECT GRANTEE FROM DBA_ROLE_PRIVS WHERE GRANTEE=:B1 AND GRANTED_ROLE='DBA' )
END OF STMT
...
 Bind#0
...
  value="GGS"
...
 Bind#1
...
  value="GGS"
...

The SQL statement is actually checking two things. The first is looking for the current username in the dba_goldengate_privileges view. This view isn’t listed in the Oracle 11.2 documentation, but it does appear in the 12c docs:

ALL_GOLDENGATE_PRIVILEGES displays details about Oracle GoldenGate privileges for the user.

Oracle GoldenGate privileges are granted using the DBMS_GOLDENGATE_AUTH package.

Related Views

DBA_GOLDENGATE_PRIVILEGES displays details about Oracle GoldenGate privileges for all users who have been granted Oracle GoldenGate privileges.

USER_GOLDENGATE_PRIVILEGES displays details about Oracle GoldenGate privileges. This view does not display the USERNAME column.

I had previously run dbms_goldengate_auth to grant the privileges here, so this should be OK.

The second check simply verifies that the DBA role has been granted to the current user, again as recommended by the documentation. (A side note: in previous versions, I had avoided granting the overly broad DBA role to the GoldenGate user in favor of specific grants for the objects it uses. There’s no reason, for example, for the GoldenGate user to read and modify data objects unrelated to its own replication activities. And I would argue that restricted permissions help avoid errors such as putting the wrong schema in a map statement. But sadly that’s no longer possible in the world of 11.2.0.4.)

Running the query manually to verify that the grants are indeed in place:

SQL> SELECT COUNT(*) FROM ( SELECT GP.USERNAME FROM DBA_GOLDENGATE_PRIVILEGES GP WHERE GP.USERNAME = 'GGS'
UNION ALL SELECT GRANTEE FROM DBA_ROLE_PRIVS WHERE GRANTEE='GGS' AND GRANTED_ROLE='DBA' );

  COUNT(*)
----------
         2

Looks good, so that doesn’t seem to be the problem.

Tracing #2: system properties

Back to the 10046 tracefile:

PARSING IN CURSOR #140324119717656 len=45 dep=1 uid=0 oct=3 lid=0 tim=1388531465253124 hv=3393782897 ad='78ae2b40' sqlid='9p6bq1v54k13j'
select value$ from sys.props$ where name = :1
END OF STMT
...
 Bind#0
...
  value="GG_XSTREAM_FOR_STREAMS"
...
FETCH #140324119717656:c=0,e=44,p=0,cr=2,cu=0,mis=0,r=0,dep=1,og=1,plh=415205717,tim=1388531465254441

Because this SQL statement involves an ordinary select without an aggregate function, I can look at the FETCH line in the tracefile to get the number of rows returned. In this case it’s r=0, meaning no rows returned.

The query itself is looking for a system property I haven’t seen before: GG_XSTREAM_FOR_STREAMS. A Google search returns only a single result: the PDF version of the Oracle 11.2 XStream guide. Quoting:

ENABLE_GG_XSTREAM_FOR_STREAMS Procedure

This procedure enables XStream capabilities and performance optimizations for Oracle Streams components.

This procedure is intended for users of Oracle Streams who want to enable XStream capabilities and optimizations. For example, you can enable the optimizations for an Oracle Streams replication configuration that uses capture processes and apply processes to replicate changes between Oracle databases.

These capabilities and optimizations are enabled automatically for XStream components, such as outbound servers, inbound servers, and capture processes that send changes to outbound servers. It is not necessary to run this procedure for XStream components.

When XStream capabilities are enabled, Oracle Streams components can stream ID key LCRs and sequence LCRs. The XStream performance optimizations improve efficiency in various areas, including:

• LCR processing
• Handling large transactions
• DML execution during apply
• Dependency computation and scheduling
• Capture process parallelism

On the surface, I don’t see what this would have to do with trigger execution, but I’m going to try enabling it as per the newly read document anyway:

SQL> exec dbms_xstream_adm.ENABLE_GG_XSTREAM_FOR_STREAMS(enable=>true);

PL/SQL procedure successfully completed.

SQL> exec sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true);
BEGIN sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true); END;

*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_XSTREAM_GG_INTERNAL", line 46
ORA-06512: at "SYS.DBMS_XSTREAM_GG", line 13
ORA-06512: at line 1

No dice.

Tracing #3: process names

Onto the next SQL in the tracefile:

PARSING IN CURSOR #140324120912848 len=114 dep=1 uid=0 oct=3 lid=0 tim=1388531465255628 hv=1670585998 ad='6c2d6098' sqlid='a9mwtndjt67nf'
SELECT COUNT(*) FROM V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.PROGRAM LIKE 'extract%' AND p.spid = :1
END OF STMT
...
 Bind#0
...
  value="2293"

Now we look in v$session to see whether the session associated with OS PID 2293 (which happens to be the SPID of our current shadow process) has a PROGRAM column starting with the word extract. extract is, naturally, the name of the GoldenGate executable that captures data from the source system. In a GoldenGate system, however, trigger suppression does not happen in the extract process at all, but rather in the replicat process that applies changes on the target system. So I’m going to skip this check and move on to the next one in the tracefile:

PARSING IN CURSOR #140324120905624 len=169 dep=1 uid=0 oct=3 lid=0 tim=1388531465257346 hv=3013382849 ad='6c122b38' sqlid='38pkvxattt4q1'
SELECT COUNT(*) FROM V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND (S.MODULE LIKE 'OGG%' OR S.MODULE = 'GoldenGate') AND S.PROGRAM LIKE 'replicat%' AND p.spid = :1
END OF STMT
...
 Bind#0
...
  value="2293"

This SQL is similar to the previous one, but instead of looking for a program called extract, it looks for one called replicat, and adds an extra check to see if the module column either starts with OGG or equals GoldenGate. And since it’s the replicat process that disables triggers in GoldenGate, this check is likely the relevant one.

To make this check succeed, I’m going to have to change both the program and module columns in v$session for the current session. Of the two, module is much easier to modify: a single call to dbms_application_info.set_module. Modifying program is less straightforward. One approach is to use Java code with Oracle’s JDBC Thin driver and set the aptly-named v$session.program connection property, as explained in De Roeptoeter. But I’m hoping to stay with something I can do in SQL*Plus. If you’ve ever looked through a packet trace of a SQL*Net connection being established, you will know that the client passes the program name at connection time, so it could in principle be modified by rewriting the network packet in transit. That is also complex to get working, as it involves fixing checksums and the like. There’s a post on Slavik’s blog with a sample OCI C program that modifies its own program information. Again, more complexity than I’d like, but it gave me an idea: if program is populated from the name of the client-side executable, why don’t we simply copy sqlplus to a name that dbms_xstream_gg likes better?

[oracle@ora11gr2b ~]$ cp $ORACLE_HOME/bin/sqlplus ./replicat
[oracle@ora11gr2b ~]$ ./replicat ggs

SQL*Plus: Release 11.2.0.4.0 Production on Mon Dec 30 14:09:05 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> exec dbms_application_info.set_module('OGG','');

PL/SQL procedure successfully completed.

SQL> exec sys.dbms_xstream_gg.set_foo_trigger_session_contxt(fire=>true);

PL/SQL procedure successfully completed.

Success!

Wrapping up

So it looks like you can disable triggers per-session in 11.2.0.4 just like in previous versions, but you need to jump through a few more hoops to do it. A few conclusions to draw:

  • Oracle patchsets, while normally intended to include bugfixes, can have major changes to underlying functionality too. See Jeremy Schneider’s post on adaptive log file sync for an even more egregious example. So before applying a patchset, test thoroughly!
  • Oracle 11.2.0.4 now enforces full DBA privileges for the GoldenGate user, a very broad set of permissions, which can be a concern in security-conscious or consolidated environments.

TL;DR: Yes, you can still disable triggers per-session in Oracle 11.2.0.4, but you must have a GoldenGate license, set the enable_goldengate_replication parameter, use a program name that starts with replicat, and set your module to OGG.

Do AWR Reports Show the Whole Picture?


The AWR report is a great source of aggregated information on the top activities happening in our databases. I use the data collected in AWR quite often, and obviously the easiest way to get data out of AWR is to run the AWR report. In most cases that’s not an issue, but in certain scenarios it hides the information one is looking for, simply because of how it’s designed.

If I’m trying to collect information about top queries by physical reads, I would normally look at the “SQL ordered by Reads” section and this is what I’d see:

(Screenshot: AWR disk reads)

I have the top SQLs by physical reads – just what I’ve been looking for (except that the AWR report covers only one of my RAC nodes).

But wait a second, what if there are queries that don’t use bind variables? This might be a problem: each such query would have its own SQL_ID, and they probably wouldn’t make it into the top 10 because each is counted separately. Nothing to worry about – AWR also collects FORCE_MATCHING_SIGNATURE values (read this blog post to understand why I knew they would help), and we can use them to identify and group “similar” statements; we just need a custom script to do that.

Here I use my custom script to report the top 20 SQL_IDs by physical reads in the last 7 days (reporting data from both RAC nodes in the same list). The top few SQLs are the same as in the AWR report, but because I report database-wide statistics instead of the instance-wide view AWR gives, other SQLs make the list too. I’ve also included two additional columns:

  • DIFF_PLANS – the number of different PLAN_HASH_VALUE values reported for this SQL_ID; if only one is found, the actual PLAN_HASH_VALUE is shown
  • DIFF_FMS – the number of different FORCE_MATCHING_SIGNATURE values reported for this SQL_ID; if only one is found, the actual FORCE_MATCHING_SIGNATURE is shown
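The custom script itself isn’t reproduced in the post; a minimal sketch of a query along these lines, built on dba_hist_sqlstat (the author’s actual script surely differs), might look like:

```sql
-- Sketch: top 20 SQL_IDs by physical reads over the last 7 days, all instances.
-- DIFF_PLANS / DIFF_FMS show the single value when only one is found,
-- otherwise the count of distinct values.
select *
from (
  select st.sql_id,
         sum(st.disk_reads_delta) phys_reads,
         case when count(distinct st.plan_hash_value) = 1
              then to_char(max(st.plan_hash_value))
              else to_char(count(distinct st.plan_hash_value))
         end diff_plans,
         case when count(distinct st.force_matching_signature) = 1
              then to_char(max(st.force_matching_signature))
              else to_char(count(distinct st.force_matching_signature))
         end diff_fms
  from   dba_hist_sqlstat  st,
         dba_hist_snapshot sn
  where  sn.snap_id         = st.snap_id
  and    sn.dbid            = st.dbid
  and    sn.instance_number = st.instance_number
  and    sn.begin_interval_time > sysdate - 7
  group  by st.sql_id
  order  by phys_reads desc
)
where rownum <= 20;
```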

(Screenshot: custom script output by SQL_ID)

Now I can adjust the custom script to aggregate the data by FORCE_MATCHING_SIGNATURE instead of SQL_ID. I’ll still keep the DIFF_PLANS column and will add a new one – DIFF_SQLID.

(Screenshot: custom script output by FORCE_MATCHING_SIGNATURE)

The situation is a little different now. Notice how the second row reports FORCE_MATCHING_SIGNATURE = 0; this typically indicates PL/SQL blocks that execute the SQL statements and aggregate statistics from them, so we’re not interested in those. Otherwise, the original report by SQL_ID was quite accurate in this situation, and my suspicion about literal values being used where binds should be didn’t materialize. Could I be missing anything else? Yes: even FORCE_MATCHING_SIGNATURE can be misleading when identifying top resource consumers. You can write two completely different SQLs (e.g. “select * from dual a” and “select * from dual b”) that do the same thing and use the same execution plan. Let’s query the top consumers by PLAN_HASH_VALUE to check this theory!

(Screenshot: custom script output by PLAN_HASH_VALUE)

I’ve highlighted the third row, as the same PLAN_HASH_VALUE is reported for 20 different SQL_IDs, which put it in third place in the top list by physical reads (actually second place, since PLAN_HASH_VALUE=0 can be ignored). The next query expands the third row:
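The post shows only a screenshot of that drill-down’s output; the query itself isn’t reproduced. A sketch of how such a drill-down might look (the bind :phv stands for the chosen plan hash value) would be:

```sql
-- Hypothetical drill-down: list the SQL_IDs that share one execution plan,
-- with their physical reads and the count of distinct signatures.
select st.sql_id,
       sum(st.disk_reads_delta)                    phys_reads,
       count(distinct st.force_matching_signature) diff_fms
from   dba_hist_sqlstat st
where  st.plan_hash_value = :phv
group  by st.sql_id
order  by phys_reads desc;
```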

(Screenshot: SQL_IDs for the plan)

And here are all the SQL statements:

(Screenshot: all SQLs for the plan)

What I have here is 20 different views generated by Oracle Discoverer that query the database using exactly the same execution plan. A closer look revealed that the views included hardcoded query parameters (date intervals for reporting), but in the end this was the same query! It’s the second-top query by physical reads in the database, and if I tune it, all 20 Discoverer views will benefit.

I think one of the drawbacks of AWR reports is that they cannot identify such situations; it would be great if the user could choose the column by which the aggregation is done. In the situation I described, I was able to identify one of the top queries by physical reads only when I aggregated the data by PLAN_HASH_VALUE.

Purge statspack snaps of source db on test db having different DBID?


As most DBAs are aware, clean deletion of old statspack snaps was very difficult up to RDBMS version 9i. Thanks to the statspack.purge procedure introduced by Oracle in 10gR1, it is now possible to purge un-referenced data too. This blog post explains how to use the statspack.purge procedure on test/clone databases whose database identifier (DBID) differs from that of the source database. Please remember that the steps explained in this post are not required when the source and test databases share the same DBID.

The DBID of a cloned database normally changes in the following scenarios:

1. The test database was created with the commonly used RMAN ‘duplicate database’ feature.

2. The database utility ‘nid’ was used to change the test database name and DBID.

3. The test database controlfile was re-created using a script generated from a text backup controlfile.

Suppose your production/source database is configured to generate statspack snaps every 20 minutes with a retention of 90 days. When this source database is cloned using one of the above methods to create a test database, the clone inherits the same behavior. The test database now contains statspack snaps belonging to the source database as well as to the current database. Even if you modify the existing purge script to retain fewer snaps, it only applies to snaps belonging to the current DBID. Snaps belonging to any other DBID are never purged by that script, even though they are no longer valid for this test database.

1. Gather the DBID details from stats$snapshot table on the test database.

For example,

SQL> select distinct dbid from stats$snapshot;

      DBID
----------
1215068670  ==> This is the source database DBID
 393689388  ==> This is the test database DBID

2. Gather the snaps range handled by the source database using the following queries.

For example:

SQL> select min(snap_id) from stats$snapshot where dbid=1215068670;

MIN(SNAP_ID)
------------
       90920

SQL> select max(snap_id) from stats$snapshot where dbid=1215068670;

MAX(SNAP_ID)
------------
       93775

3. Gather the row count on various tables to verify the successful purge activity completion.

For example:

SQL> select count(1) from stats$snapshot where dbid=1215068670;

  COUNT(1)
----------
      2211

SQL> select count(1) from stats$sqltext where last_snap_id < 93776;

  COUNT(1)
----------
    380056

SQL> select count(1) from STATS$STATSPACK_PARAMETER where dbid=1215068670;

  COUNT(1)
----------
         1

SQL> select count(1) from STATS$UNDOSTAT where snap_id < 93776;

  COUNT(1)
----------
      4422

4. Copy the $ORACLE_HOME/rdbms/admin/sppurge.sql to your home directory and modify it accordingly.

i) Remove the dbid column from the select statement in the script.

select d.dbid dbid ==> Remove this column being selected.
, d.name db_name
, i.instance_number inst_num
, i.instance_name inst_name
from v$database d,
v$instance i;

ii) Substitute the source database DBID on this location on the script.

begin
:dbid := &dbid; ==> Change the ‘&dbid’ value as 1215068670
:inst_num := &inst_num;
:inst_name := ‘&inst_name’;
:db_name := ‘&db_name’;
end;
/

iii) Change the variable “i_extended_purge” value as ‘true’ on the script.

:snapshots_purged := statspack.purge( i_begin_snap => :lo_snap
, i_end_snap => :hi_snap
, i_snap_range => true
, i_extended_purge => false ==> Change the value as true
, i_dbid => :dbid
, i_instance_number => :inst_num);
end;

5. Execute the custom purge script on the test database and provide the snapshot range when prompted.

For example:

Specify the Lo Snap Id and Hi Snap Id range to purge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for losnapid: 90920
Using 90920 for lower bound.

Enter value for hisnapid: 93775
Using 93775 for upper bound.

Deleting snapshots 90920 - 93775.

6. Log back in to the test database and re-run the verification queries to confirm the deletion.

SQL> select count(1) from stats$snapshot where dbid=1215068670;

  COUNT(1)
----------
         0

SQL> select count(1) from stats$sqltext where last_snap_id < 93776;

  COUNT(1)
----------
      7840

SQL> select count(1) from STATS$STATSPACK_PARAMETER where dbid=1215068670;

  COUNT(1)
----------
         1

SQL> select count(1) from STATS$UNDOSTAT where snap_id < 93776;

  COUNT(1)
----------
         0

As you can see, this is a very simple action plan, though the custom purge script may need further modification when used on a RAC database.

How To Improve SQL Statements Performance: Using SQL Plan Baselines


The performance of any Oracle database relies heavily on query execution. A SQL statement's execution plan can change for a variety of reasons: gathering optimizer statistics (table, index, schema, etc.), changing optimizer parameters, altering schema definitions, adding indexes, and so on. As experienced Oracle DBAs, we should be aware that these actions, although meant to improve SQL performance, will not always guarantee positive results.

In this situation, many of us would try to freeze execution plans (Stored Outlines) or lock the optimizer statistics. However, doing so prevents such environments/databases from taking advantage of new optimizer functionality or access paths that would improve SQL statement performance. That is where SQL Plan Management comes in very handy…

SQL Plan Management (SPM) provides a framework for completely transparent controlled execution plan evolution. With SPM the optimizer automatically manages execution plans and ensures only known or verified plans are used. When a new plan is found for a SQL statement it will not be used until it has been verified by the database to have comparable or better performance than the current plan.

----------------------

Next, I will explain the steps for forcing a bad query to use a better execution plan by loading SQL Plans into SPM using AWR.

Identifying the Slow Query

We have the following scenario:

- An Oracle 11.2.0.3 single-instance database

- Performance issue caused by the following bad query:

[screenshot: the problem SQL statement]

with the initial explain plan:

[screenshot: initial explain plan, full table scan]

As shown in the plan output, a full table scan was used, resulting in excessive IO for this query. It seemed this query needed an index to reduce the IO, so I added two indexes on the ‘status’ and ‘prevobjectid’ columns of the EMPLOYEES table, gathered table statistics, and then checked the explain plan again. Due to the index creation, the DISPLAY_AWR output now shows a newly generated explain plan with an improved cost, using an index range scan instead of the full table scan of the initial plan (Plan hash value: 2172072736).

[screenshot: new explain plan, index range scan]
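The screenshots are not reproduced here, but both plans can be pulled from AWR with DBMS_XPLAN.DISPLAY_AWR, using the SQL_ID from this example:

```sql
-- Show all plans recorded in AWR for this SQL_ID,
-- including the old full-scan plan and the new index plan.
select *
  from table(dbms_xplan.display_awr('9kt723m2u5vna'));
```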

Now we have obtained a new, better execution plan in AWR for the SQL statement, but our next question would be, “How can we make sure it will be the only plan picked by the Cost Based Optimizer for future executions”?

The answer: “Create a SQL Tuning Set for the SQL, then create a new SQL Plan Baseline from the STS so the Optimizer will choose the preferred execution plan.”

Each time a SQL statement is compiled, the optimizer first uses a cost-based search method to build a best-cost plan, then tries to find a matching plan in the SQL plan baseline.  If a match is found, the optimizer will proceed using this plan. Otherwise, it evaluates the cost of each accepted plan in the SQL plan baseline and selects the plan with the lowest cost.

Here are the steps for loading SQL Plans into SPM using AWR by implementing SQL Baselines for the bad query.

Step 1: Set up a SQL Plan Baseline using a known-good plan sourced from AWR snapshots.

To do so, SQL Plan Management must be active; the easiest condition to check is optimizer_use_sql_plan_baselines, which needs to be TRUE.

[screenshot: optimizer_use_sql_plan_baselines is TRUE]
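Since the screenshot is omitted, here is a sketch of that check:

```sql
-- SPM plan selection is controlled by this parameter (default TRUE).
show parameter optimizer_use_sql_plan_baselines

-- Or, without the SQL*Plus shortcut:
select name, value
  from v$parameter
 where name = 'optimizer_use_sql_plan_baselines';
```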
Step 2: Create SQL Tuning Set (STS).

A SQL tuning set (STS) is a database object that includes one or more SQL statements along with their execution statistics and execution context, and can include a user priority ranking. You can load SQL statements into a SQL tuning set from different SQL sources, such as AWR, the shared SQL area, or customized SQL provided by the user. An STS includes:

  • A set of SQL statements
  • Associated execution context, such as user schema, application module name and action, list of bind values, and the cursor compilation environment
  • Associated basic execution statistics, such as elapsed time, CPU time, buffer gets, disk reads, rows processed, cursor fetches, the number of executions, the number of complete executions, optimizer cost, and the command type
  • Associated execution plans and row source statistics for each SQL statement (optional)

The concept of SQL tuning sets, along with the DBMS_SQLTUNE package to manipulate them, was introduced in Oracle 10g as part of the Automatic SQL Tuning functionality. Oracle 11g makes further use of SQL tuning sets with the SQL Performance Analyzer, which compares the performance of the statements in a tuning set before and after a database change. The database change can be as major or minor as you like, such as:

  • Database, operating system, or hardware upgrades.
  • Database, operating system, or hardware configuration changes.
  • Database initialization parameter changes.
  • Schema changes, such as adding indexes or materialized views.
  • Refreshing optimizer statistics.
  • Creating or changing SQL profiles.

Now I create a SQL Tuning Set based on the slow query with a SQL_ID of 9kt723m2u5vna.

[screenshot: creating the SQL Tuning Set]
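The command behind the omitted screenshot is typically a call to DBMS_SQLTUNE.CREATE_SQLSET; the STS name below is my own choice, not taken from the original post:

```sql
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(
    sqlset_name => 'STS_9kt723m2u5vna',  -- hypothetical name
    description => 'STS for slow query 9kt723m2u5vna');
END;
/
```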

Step 3: Populate STS from AWR.

Now I will locate the AWR snapshots required to populate the STS, and load the STS based on those snapshot ID’s and the SQL_ID.

[screenshot: loading the STS from AWR snapshots]
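A sketch of populating the STS from AWR using DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY; the snapshot IDs and STS name are placeholders for the values from your environment:

```sql
DECLARE
  l_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN l_cursor FOR
    SELECT VALUE(p)
      FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                   begin_snap     => 100,   -- hypothetical: your AWR snap range
                   end_snap       => 110,
                   basic_filter   => q'[sql_id = '9kt723m2u5vna']',
                   attribute_list => 'ALL')) p;
  -- Load the captured statements into the tuning set.
  DBMS_SQLTUNE.LOAD_SQLSET(
    sqlset_name     => 'STS_9kt723m2u5vna',  -- hypothetical name
    populate_cursor => l_cursor);
END;
/
```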

Step 4: List out SQL Tuning Set contents.

Now I can query the STS to verify it contains the expected data.

[screenshot: STS contents]
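A sketch of the verification query (the STS name is the hypothetical one used throughout these sketches):

```sql
-- Confirm the statement and its plan were captured into the STS.
select sql_id, plan_hash_value, elapsed_time, buffer_gets
  from table(dbms_sqltune.select_sqlset('STS_9kt723m2u5vna'));
```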

Step 5: List out the existing SQL Plan Baselines.

Though I have created and verified the STS, the Baseline has not yet been created.

[screenshot: no baseline exists yet]
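This check is typically a query against DBA_SQL_PLAN_BASELINES, for example:

```sql
-- At this point no baseline should exist yet for our statement.
select sql_handle, plan_name, origin, enabled, accepted
  from dba_sql_plan_baselines;
```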

Step 6: Load desired plan from STS as SQL Plan Baseline

Now I will load the known good plan that uses the newly created index into the Baseline.

[screenshot: loading the plan into the Baseline]
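The usual command here is DBMS_SPM.LOAD_PLANS_FROM_SQLSET; a sketch using the hypothetical STS name from earlier:

```sql
-- set serveroutput on to see the count
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
               sqlset_name  => 'STS_9kt723m2u5vna',  -- hypothetical name
               basic_filter => q'[sql_id = '9kt723m2u5vna']');
  DBMS_OUTPUT.PUT_LINE(l_plans || ' plan(s) loaded');
END;
/
```

Plans loaded this way are marked ACCEPTED, so the optimizer may use them immediately.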

Step 7: List out the Baselines again.

Now verify the Baseline contains the desired plan.

[screenshot: Baseline now contains the desired plan]
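A sketch of re-checking the baselines after the load:

```sql
-- The loaded plan should now appear as ENABLED and ACCEPTED.
select sql_handle, plan_name, origin, enabled, accepted
  from dba_sql_plan_baselines
 where created > sysdate - 1;
```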

Step 8: Flush the current bad SQL plan.

After loading the baseline, the current cursor must be flushed from the cache to make sure the new plan will be used on the next execution of sql_id 9kt723m2u5vna.

[screenshot: flushing the cursor from the shared pool]
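One common way to do this without bouncing the instance is DBMS_SHARED_POOL.PURGE (a sketch; the original post's screenshot may have used a different method):

```sql
-- Find the cursor's address and hash value.
select address, hash_value
  from v$sqlarea
 where sql_id = '9kt723m2u5vna';

-- Substitute the values returned above; the 'C' flag means cursor.
exec DBMS_SHARED_POOL.PURGE('&address,&hash_value', 'C');
```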

Conclusion

As this blog post demonstrates, SQL Plan Management (SPM) allows database users to maintain stable yet optimal performance for a set of SQL statements, and baselines are a definite step in the right direction. Baselines can be captured from multiple sources, and SPM allows new plans to be used if they perform better than the baseline, which can improve overall application and system performance.

2013 Year in Review – Oracle E-Business Suite


Here are the top five things in the Oracle E-Business Suite world that will have a major impact in 2014 and beyond.

1. Oracle E-Business Suite 12.2 Now Available

2013 started on a low note in the Oracle E-Business Suite (EBS) world. Many people were expecting an announcement about an upcoming EBS release during OpenWorld 2012, but all they got was an extension of the support deadline for existing 11i EBS customers. Oracle finally announced EBS R12.2 a few days before OpenWorld 2013. This release packs exciting features like Online Patching, which elevates Oracle E-Business Suite's ranking in the ERP systems domain. Online Patching will enable large multi-national customers to consolidate their per-country ERP systems into one single global Oracle E-Business Suite instance, as it cuts the downtime required for a patching maintenance window to almost nil. This is a big plus for clients who cannot afford downtime because their user base is spread all over the world. 2014 will be a year of upgrades to R12.2 for many clients.

2. 12.1.0.1 Database Certified with Oracle E-Business Suite

Around the same time as the R12.2 announcement, Oracle certified the 12c Database with Oracle EBS. The good news is that they certified Oracle 11i with the 12c Database as well. This gives EBS clients the option to get onto the newest version of the Oracle Database and take advantage of the new 12c features. The effort involved in upgrading the database is significantly less than upgrading to a newer version of EBS, so I believe many customers will take up the 12c database upgrade before the R12.2 EBS upgrade. Upgrading the database ahead of EBS will also save some hours in the future R12.2 upgrade downtime window.

3. E-Business Suite Support Timelines Updated at OpenWorld 2013

Oracle once again extended the support timelines for 11i customers. They named it Exception Support, and it ends in December 2015. During this Exception Support period, Oracle will primarily provide fixes for Severity 1 issues and security patches. This gives customers on 11i two additional years to migrate to the latest R12.2. With typical R12 upgrades taking around a year, the sooner you plan and start your R12.2 migration, the better.

4. No to Third-Party Tools to Modify Your EBS Database

Oracle Development officially warned in their blog about using third-party tools to modify, archive, and purge data in Oracle E-Business Suite. Managing data growth in Oracle EBS is a known problem. Oracle now wants customers to use Oracle Database technologies like ILM, Advanced Compression, and Partitioning to archive the data instead of third-party utilities. Note that all of these database features will cost customers additional licensing money, so get your bargaining hat on with your Oracle account manager and score some discounts using this Oracle Achilles heel, namely purging and archiving EBS data.

5. Sign E-Business Suite JAR Files Now

Do you remember the days when Oracle EBS moved from Oracle JInitiator to Sun JRE for Oracle Forms? Then be prepared for another similar change around Oracle Forms. The stream of viruses and malware exploiting bugs in the Oracle/Sun JRE made Oracle tighten security around the JRE. It is now required to sign Forms JAR files with a real certificate. In future releases of Oracle JRE 7, unsigned Oracle Forms will stop working completely, so customers caught unaware of this will be in for big trouble with user complaints.

Creating a single-node EBS 12.1.3 Vision instance from OVM templates


“Seriously John, do you blog about anything else?”

Yeah, well… Evidence is strongly against me so far. :)

One of the more common questions I’ve received as a followup to my Build an E-Business Suite 12.1.3 Sandbox In VirtualBox in One Hour post has been, “Can I do this in a single node instead of creating two VMs?” The answer, of course, is yes, but it never seemed like a good topic for a full blog post. Given the number of requests, however (and the patience and persistence of one reader in particular — hi Sandip!), I’m using this post to present quick notes on how to create a single-node EBS 12.1.3 Vision instance from the OVM templates, instead of the two-node system for which they’re designed.

In addition to the normal complete-lack-of-support caveats listed in the original post, please be aware that this post contains pointers and rough notes, not detailed instructions. Basically, I’ve just wrapped some formatting around some notes from a presentation I gave on this topic last summer. If you don’t understand what’s happening in the original set of instructions, these notes will not be useful to you at all. Please read the original post carefully before asking questions about this one.

System specs

Since we’re running apps and the database in a single node we need to configure a slightly more powerful single VM. Here’s partial output from ‘vboxmanage showvminfo’ that illustrates the important points (more memory, more CPU, and an extra disk for the Apps software). Otherwise, the configuration (network interfaces, rescue boot image setup, etc) is the same as in the original post.

Memory size: 3072MB
Number of CPUs: 2
Storage Controller Name (1): SATA
Storage Controller Type (1): IntelAhci
SATA (0, 0): /Volumes/Valen/OVM_1213/EBS121RootDisk.vdi (UUID: ebd87cd3-2620-49b6-b24d-c64158b183da)
SATA (1, 0): /Volumes/Valen/OVM_1213/EBS121DB.vdi (UUID: 0ae2f4dc-bd40-4299-82f7-eebea2c34de7)
SATA (2, 0): /Volumes/Valen/OVM_1213/EBS121Apps.vdi (UUID: 7fc14a42-f4bc-4741-8ba7-a33341ac73ea)

Still the same

The following steps are almost the same as in the original post:

  1. Download the software
  2. Extract the templates
  3. Convert the disk images to .vdi format (you can skip the Apps server's System.img disk; you won't need it, only ebs1211apps.img). You'll only need to create one VM at this step, attaching the Apps vdi as the third disk.
  4. Boot the database server VM in rescue mode from the install CD — the steps to install the new kernel and run mkinitrd remain the same

Things change a bit before moving on to step 5, “Reboot and prepare for next steps,” as described below.

What’s different?

Apart from the obvious “no second VM to create,” here are the essential changes I made to my build process for a single-node Vision instance:

  • Before rebooting, add another line to /etc/fstab to attach the apps software volume:
    /dev/sdc1 /u02 ext3 defaults 1 0
  • Before rebooting, do not edit the /etc/sysconfig/oraclevm-template script. I found it to be easier to just let the script execute at boot time, although it did require me to be a bit more careful about my inputs.
  • After rebooting, the template configuration script will guide you through the configuration of the network interfaces and the Vision database tier, as described in the original post

Once the database is started, you’ll need to make a few changes to the scripts that configure, start, and stop the applications tier. First, log in to the VM as root, and then adjust the scripts to account for the new mount point. To save your sanity, it’s also necessary to comment out ovm_configure_network from the ebiz_1211_reconfig.sh script:

# cd /u02
# perl -pi.old -e 's/u01/u02/g' startapps.sh stopapps.sh ebiz_1211_reconfig.sh
# vi ebiz_1211_reconfig.sh
# diff ebiz_1211_reconfig.sh ebiz_1211_reconfig.sh.old
47c47
< #ovm_configure_network "static"
---
> ovm_configure_network "static"
61c61
< su oracle -c "perl /u02/E-BIZ/apps/apps_st/comn/clone/bin/adcfgclone.pl appsTier"
---
> su oracle -c "perl /u01/E-BIZ/apps/apps_st/comn/clone/bin/adcfgclone.pl appsTier"

After the scripts have been adjusted, you’re ready to configure the apps tier. Again, as root, run the /u02/ebiz_1211_reconfig.sh script, which will invoke AutoConfig and ask you all the necessary questions. Your answers will differ from the two-node process in two important ways:

  1. There is only one hostname for the environment now
  2. All references to the apps software locations will point to /u02, not /u01

Here’s an excerpt of the Autoconfig run, with only the important/changed bits included:

 cd /u02
[root@gkar u02]# ./ebiz_1211_reconfig.sh
Configuring Oracle E-Business Suite...

Target System Hostname (virtual or normal) [gkar] :

Target System Database SID : VIS

Target System Database Server Node [gkar] :

Target System Database Domain Name [local.org] :

Target System Base Directory : /u02/E-BIZ

Target System Tools ORACLE_HOME Directory [/u02/E-BIZ/apps/tech_st/10.1.2] :

Target System Web ORACLE_HOME Directory [/u02/E-BIZ/apps/tech_st/10.1.3] :

Target System APPL_TOP Directory [/u02/E-BIZ/apps/apps_st/appl] :

Target System COMMON_TOP Directory [/u02/E-BIZ/apps/apps_st/comn] :

Target System Instance Home Directory [/u02/E-BIZ/inst] :

Do you want to preserve the Display [atgtxk-09:0.0] (y/n)  : n

Target System Display [gkar:0.0] :

Do you want the the target system to have the same port values as the source system (y/n) [y] ? : n

Target System Port Pool [0-99] : 42

UTL_FILE_DIR on database tier consists of the following directories.

1. /usr/tmp
2. /usr/tmp
3. /u01/E-BIZ/db/tech_st/11.2.0.2/appsutil/outbound/VIS_gkar
4. /usr/tmp
Choose a value which will be set as APPLPTMP value on the target node [1] : 1

Do you want to startup the Application Services for VIS? (y/n) [y] :  y

Cleanup items and other reminders

To prevent annoyances when starting/stopping services, and logging in as oracle:

  • touch /home/oracle/.passchanged
  • rm /u02/E-BIZ/apps/apps_st/appl/*mydb*

Also, since our root disk came from the database server VM template, only database services will stop and start automatically upon server shutdown and boot. You will need to use the startapps.sh and stopapps.sh scripts in /u02 to manage the applications tier services.

That should be enough to get you going. Good luck!


What Happens When Active DB Duplication Goes Wrong?


There are many blog posts out there about active database duplication; however, they were all tested under ideal conditions. What happens when a tablespace is created in the middle of active duplication, and how do you resolve the resulting error? Read on if you would like to know.

For my test case, I created database db01 using OMF and performed active duplication to db02, also using OMF, on the same host. While the duplication was running, I created a new tablespace. Here are the details of the steps performed:

Review files and pfile for TARGET database:

[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ ll
total 7692
-rw-rw----. 1 oracle oinstall    1544 Feb 14 13:06 hc_db01.dat
-rwxr-x---. 1 oracle oinstall     590 Feb 13 08:14 initdb01.ora
-rwxr-x---. 1 oracle oinstall     590 Feb 13 08:14 initdb02.ora
-rw-r-----. 1 oracle oinstall      24 Feb 13 08:18 lkDB01
-rw-r-----. 1 oracle oinstall       0 Feb 14 13:06 lkinstdb01
-rw-r-----. 1 oracle oinstall    2048 Feb 11 14:00 orapwdb01
-rw-r-----. 1 oracle oinstall    2048 Feb 11 15:48 orapwdb02
-rw-r-----. 1 oracle oinstall 7847936 Feb 11 18:06 snapcf_db01.f
-rw-r-----. 1 oracle oinstall    3584 Feb 14 13:05 spfiledb01.ora

[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ cat initdb02.ora
*.audit_file_dest='/u01/app/oracle/admin/adump'
*.audit_trail='none'
*.compatible='11.2.0.4.0'
*.db_block_size=8192
*.db_create_file_dest='/oradata'
*.db_domain=''
*.db_name='db02'
*.db_recovery_file_dest='/oradata/fra'
*.db_recovery_file_dest_size=4g
*.diagnostic_dest='/u01/app/oracle'
*.event='10795 trace name context forever, level 2'
*.fast_start_mttr_target=300
*.java_pool_size=0
*.local_listener='(ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1531))'
*.pga_aggregate_target=268435456
*.processes=100
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=805306368
*.undo_tablespace='UNDOTBS'

[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ diff ./initdb01.ora ./initdb02.ora
7c7
< *.db_name='db01'
---
> *.db_name='db02'
[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$

Notice there is only one difference between the pfiles for db01 and db02.

Create same directory structures for TARGET database:

[oracle@arrow:]/oradata
$ ls DB*
DB01:
controlfile  datafile  onlinelog

DB02:
controlfile  datafile  onlinelog
[oracle@arrow:]/oradata
$

Startup NOMOUNT TARGET database:

[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 14 13:11:13 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@db02> startup nomount;
ORACLE instance started.

Total System Global Area  801701888 bytes
Fixed Size                  2257520 bytes
Variable Size             222301584 bytes
Database Buffers          570425344 bytes
Redo Buffers                6717440 bytes
SYS@db02>

Start active database duplication:

[oracle@arrow:db01]/media/sf_linux_x64/rman
$ rman @dupdbomf.rman

Recovery Manager: Release 11.2.0.4.0 - Production on Fri Feb 14 13:12:04 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target *
2> connect auxiliary *
3> run {
4> allocate channel c1 type disk maxopenfiles 1;
5> allocate auxiliary channel a1 type disk;
6> duplicate target database to db02
7>   from active database nofilenamecheck
8>   spfile
9>   PARAMETER_VALUE_CONVERT ('DB01','DB02')
10> ;
11> }
12> exit;
connected to target database: DB01 (DBID=1470673955)

connected to auxiliary database: DB02 (not mounted)

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=14 device type=DISK

allocated channel: a1
channel a1: SID=96 device type=DISK

Starting Duplicate Db at 14-FEB-2014 13:12:07

contents of Memory Script:
{
   backup as copy reuse
   targetfile  '/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb01.ora' auxiliary format
 '/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora'   ;
   sql clone "alter system set spfile= ''/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora''";
}
executing Memory Script

Starting backup at 14-FEB-2014 13:12:08
Finished backup at 14-FEB-2014 13:12:09

sql statement: alter system set spfile= ''/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora''

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''DB02'' comment=
 ''duplicate'' scope=spfile";
   sql clone "alter system set  control_files =
 ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''DB02'' comment= ''duplicate'' scope=spfile

sql statement: alter system set  control_files =  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= '''' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''DB01'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''DB02'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   backup as copy current controlfile auxiliary format  '/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl';
   restore clone controlfile to  '/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl' from
 '/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl';
   sql clone "alter system set  control_files =
  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  control_files =   ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''DB01'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''DB02'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

Starting backup at 14-FEB-2014 13:12:55
channel c1: starting datafile copy
copying current control file
output file name=/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/snapcf_db01.f tag=TAG20140214T131255 RECID=1 STAMP=839509978
channel c1: datafile copy complete, elapsed time: 00:00:07
Finished backup at 14-FEB-2014 13:13:03

Starting restore at 14-FEB-2014 13:13:03

channel a1: copied control file copy
Finished restore at 14-FEB-2014 13:13:04

sql statement: alter system set  control_files =   ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= ''Set by RMAN'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

database mounted

contents of Memory Script:
{
   set newname for clone datafile  1 to new;
   set newname for clone datafile  2 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   backup as copy reuse
   datafile  1 auxiliary format new
   datafile  2 auxiliary format new
   datafile  3 auxiliary format new
   datafile  4 auxiliary format new
   ;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 14-FEB-2014 13:13:21

----------------------------------------------------------------------
-- While duplication was running, create new tablespace at source
--
[oracle@arrow:db01]/home/oracle
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 14 13:13:22 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@db01):PRIMARY> create tablespace mdinh;

Tablespace created.

ARROW:(SYS@db01):PRIMARY>
----------------------------------------------------------------------

channel c1: starting datafile copy
input datafile file number=00001 name=/oradata/DB01/datafile/o1_mf_system_9hsw4shz_.dbf
output file name=/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf tag=TAG20140214T131321
channel c1: datafile copy complete, elapsed time: 00:01:26
channel c1: starting datafile copy
input datafile file number=00002 name=/oradata/DB01/datafile/o1_mf_sysaux_9hsw63d2_.dbf
output file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf tag=TAG20140214T131321
channel c1: datafile copy complete, elapsed time: 00:00:35
channel c1: starting datafile copy
input datafile file number=00003 name=/oradata/DB01/datafile/o1_mf_undotbs_9hsw75h4_.dbf
output file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf tag=TAG20140214T131321
channel c1: datafile copy complete, elapsed time: 00:00:35
channel c1: starting datafile copy
input datafile file number=00004 name=/oradata/DB01/datafile/o1_mf_users_9hsw880k_.dbf
output file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf tag=TAG20140214T131321
channel c1: datafile copy complete, elapsed time: 00:00:35
Finished backup at 14-FEB-2014 13:16:32

sql statement: alter system archive log current

contents of Memory Script:
{
   backup as copy reuse
   archivelog like  "/oradata/fra/DB01/archivelog/2014_02_14/o1_mf_1_10_9hx1xkn1_.arc" auxiliary format
 "/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_%u_.arc"   ;
   catalog clone recovery area;
   switch clone datafile all;
}
executing Memory Script

Starting backup at 14-FEB-2014 13:16:34
channel c1: starting archived log copy
input archived log thread=1 sequence=10 RECID=2 STAMP=839510193
output file name=/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc RECID=0 STAMP=0
channel c1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 14-FEB-2014 13:16:35

searching for all files in the recovery area

List of Files Unknown to the Database
=====================================
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_12/o1_mf_1_10_0cp0ekfe_.arc
File Name: /oradata/fra/DB02/controlfile/o1_mf_9hoq0kmv_.ctl
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_12/o1_mf_1_10_0cp0ekfe_.arc

List of Files Which Where Not Cataloged
=======================================
File Name: /oradata/fra/DB02/controlfile/o1_mf_9hoq0kmv_.ctl
  RMAN-07518: Reason: Foreign database file DBID: 1470537681  Database Name: DB01

datafile 1 switched to datafile copy
input datafile copy RECID=1 STAMP=839510196 file name=/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=2 STAMP=839510196 file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=3 STAMP=839510197 file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=4 STAMP=839510197 file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf

contents of Memory Script:
{
   set until scn  227291;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 14-FEB-2014 13:16:39
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''DB02'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
   shutdown clone immediate;
}
executing Memory Script

sql statement: alter system set  db_name =  ''DB02'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile

Oracle instance shut down
released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 02/14/2014 13:17:01
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06094: datafile 5 must be restored

Recovery Manager complete.
[oracle@arrow:db01]/media/sf_linux_x64/rman
$

Remove the spfile and miscellaneous files for the AUXILIARY (duplicate destination) database:

Startup NOMOUNT the AUXILIARY database:

[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ rm spfiledb02.ora lkDB02 hc_db02.dat
[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 14 13:18:43 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@db02> startup nomount;
ORACLE instance started.

Total System Global Area  801701888 bytes
Fixed Size                  2257520 bytes
Variable Size             222301584 bytes
Database Buffers          570425344 bytes
Redo Buffers                6717440 bytes
SYS@db02> exit

RESTART active database duplication:

[oracle@arrow:db01]/media/sf_linux_x64/rman
$ rman @dupdbomf.rman

Recovery Manager: Release 11.2.0.4.0 - Production on Fri Feb 14 13:18:52 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target *
2> connect auxiliary *
3> run {
4> allocate channel c1 type disk maxopenfiles 1;
5> allocate auxiliary channel a1 type disk;
6> duplicate target database to db02
7>   from active database nofilenamecheck
8>   spfile
9>   PARAMETER_VALUE_CONVERT ('DB01','DB02')
10> ;
11> }
12> exit;
connected to target database: DB01 (DBID=1470673955)

connected to auxiliary database: DB02 (not mounted)

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=14 device type=DISK

allocated channel: a1
channel a1: SID=10 device type=DISK

Starting Duplicate Db at 14-FEB-2014 13:18:54

contents of Memory Script:
{
   backup as copy reuse
   targetfile  '/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb01.ora' auxiliary format
 '/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora'   ;
   sql clone "alter system set spfile= ''/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora''";
}
executing Memory Script

Starting backup at 14-FEB-2014 13:18:54
Finished backup at 14-FEB-2014 13:18:55

sql statement: alter system set spfile= ''/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora''

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''DB02'' comment=
 ''duplicate'' scope=spfile";
   sql clone "alter system set  control_files =
 ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''DB02'' comment= ''duplicate'' scope=spfile

sql statement: alter system set  control_files =  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= '''' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''DB01'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''DB02'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   backup as copy current controlfile auxiliary format  '/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl';
   restore clone controlfile to  '/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl' from
 '/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl';
   sql clone "alter system set  control_files =
  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  control_files =   ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''DB01'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''DB02'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

Starting backup at 14-FEB-2014 13:19:11
channel c1: starting datafile copy
copying current control file
output file name=/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/snapcf_db01.f tag=TAG20140214T131912 RECID=2 STAMP=839510353
channel c1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 14-FEB-2014 13:19:15

Starting restore at 14-FEB-2014 13:19:15

channel a1: copied control file copy
Finished restore at 14-FEB-2014 13:19:16

sql statement: alter system set  control_files =   ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= ''Set by RMAN'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

database mounted

Using previous duplicated file /oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf for datafile 1 with checkpoint SCN of 226956
Using previous duplicated file /oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf for datafile 2 with checkpoint SCN of 227250
Using previous duplicated file /oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf for datafile 3 with checkpoint SCN of 227262
Using previous duplicated file /oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf for datafile 4 with checkpoint SCN of 227275

contents of Memory Script:
{
   set newname for datafile  1 to
 "/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf";
   set newname for datafile  2 to
 "/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf";
   set newname for datafile  3 to
 "/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf";
   set newname for datafile  4 to
 "/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf";
   set newname for clone datafile  5 to new;

   backup as copy reuse
   datafile  5 auxiliary format new
   ;
   
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 14-FEB-2014 13:19:31
channel c1: starting datafile copy
input datafile file number=00005 name=/oradata/DB01/datafile/o1_mf_mdinh_9hx1qqko_.dbf
output file name=/oradata/DB02/datafile/o1_mf_mdinh_08p0jqb4_.dbf tag=TAG20140214T131931
channel c1: datafile copy complete, elapsed time: 00:00:15
Finished backup at 14-FEB-2014 13:19:47

sql statement: alter system archive log current

contents of Memory Script:
{
   backup as copy reuse
   archivelog like  "/oradata/fra/DB01/archivelog/2014_02_14/o1_mf_1_10_9hx1xkn1_.arc" auxiliary format
 "/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_%u_.arc"   archivelog like
 "/oradata/fra/DB01/archivelog/2014_02_14/o1_mf_1_11_9hx23s3n_.arc" auxiliary format
 "/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_%u_.arc"   ;
   catalog clone recovery area;
   catalog clone datafilecopy  "/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf",
 "/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf",
 "/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf",
 "/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf";
   switch clone datafile  1 to datafilecopy
 "/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf";
   switch clone datafile  2 to datafilecopy
 "/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf";
   switch clone datafile  3 to datafilecopy
 "/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf";
   switch clone datafile  4 to datafilecopy
 "/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf";
   switch clone datafile all;
}
executing Memory Script

Starting backup at 14-FEB-2014 13:19:53
channel c1: starting archived log copy
input archived log thread=1 sequence=10 RECID=2 STAMP=839510193
output file name=/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_09p0jqbq_.arc RECID=0 STAMP=0
channel c1: archived log copy complete, elapsed time: 00:00:01
channel c1: starting archived log copy
input archived log thread=1 sequence=11 RECID=3 STAMP=839510393
output file name=/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_0ap0jqbr_.arc RECID=0 STAMP=0
channel c1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 14-FEB-2014 13:19:56

searching for all files in the recovery area

List of Files Unknown to the Database
=====================================
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_0ap0jqbr_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_09p0jqbq_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_12/o1_mf_1_10_0cp0ekfe_.arc
File Name: /oradata/fra/DB02/controlfile/o1_mf_9hoq0kmv_.ctl
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_0ap0jqbr_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_09p0jqbq_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_12/o1_mf_1_10_0cp0ekfe_.arc

List of Files Which Where Not Cataloged
=======================================
File Name: /oradata/fra/DB02/controlfile/o1_mf_9hoq0kmv_.ctl
  RMAN-07518: Reason: Foreign database file DBID: 1470537681  Database Name: DB01

cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf RECID=2 STAMP=839510398
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf RECID=3 STAMP=839510398
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf RECID=4 STAMP=839510399
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf RECID=5 STAMP=839510399

datafile 1 switched to datafile copy
input datafile copy RECID=2 STAMP=839510398 file name=/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf

datafile 2 switched to datafile copy
input datafile copy RECID=3 STAMP=839510398 file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf

datafile 3 switched to datafile copy
input datafile copy RECID=4 STAMP=839510399 file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf

datafile 4 switched to datafile copy
input datafile copy RECID=5 STAMP=839510399 file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf

datafile 5 switched to datafile copy
input datafile copy RECID=6 STAMP=839510401 file name=/oradata/DB02/datafile/o1_mf_mdinh_08p0jqb4_.dbf

contents of Memory Script:
{
   set until scn  227620;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 14-FEB-2014 13:20:02

starting media recovery

archived log for thread 1 with sequence 10 is already on disk as file /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc
archived log for thread 1 with sequence 11 is already on disk as file /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_0ap0jqbr_.arc
archived log file name=/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc thread=1 sequence=10
archived log file name=/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_0ap0jqbr_.arc thread=1 sequence=11
media recovery complete, elapsed time: 00:00:09
Finished recover at 14-FEB-2014 13:20:14
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''DB02'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''DB02'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "DB02" RESETLOGS ARCHIVELOG
  MAXLOGFILES     16
  MAXLOGMEMBERS      2
  MAXDATAFILES       30
  MAXINSTANCES     1
  MAXLOGHISTORY      292
 LOGFILE
  GROUP   1  SIZE 100 M ,
  GROUP   2  SIZE 100 M ,
  GROUP   3  SIZE 100 M
 DATAFILE
  '/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf'
 CHARACTER SET AL32UTF8

contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf",
 "/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf",
 "/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf",
 "/oradata/DB02/datafile/o1_mf_mdinh_08p0jqb4_.dbf";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /oradata/DB02/datafile/o1_mf_temp_%u_.tmp in control file

cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf RECID=1 STAMP=839510428
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf RECID=2 STAMP=839510428
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf RECID=3 STAMP=839510428
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_mdinh_08p0jqb4_.dbf RECID=4 STAMP=839510429

datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=839510429 file name=/oradata/DB02/datafile/o1_mf_mdinh_08p0jqb4_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=5 STAMP=839510430 file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=6 STAMP=839510431 file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=7 STAMP=839510432 file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 14-FEB-2014 13:21:48
released channel: c1
released channel: a1

Recovery Manager complete.
[oracle@arrow:db01]/media/sf_linux_x64/rman
$

Did you notice that the duplication reused the previously duplicated files instead of duplicating them again?
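One way to confirm the reuse from the auxiliary side is to look at the cataloged datafile copies and their checkpoint SCNs, which match the values RMAN reported above. A minimal sketch using the standard v$datafile_copy view, run against the clone instance (the WHERE clause values are illustrative):

```sql
-- Datafile copies available for reuse, with their checkpoint SCNs
SELECT file#, name, checkpoint_change#, completion_time
FROM   v$datafile_copy
WHERE  status = 'A'
ORDER  BY file#;
```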

Nothing to Blog About — Think Again!


How often do you run off to your favourite sites and devour information put together by others? It’s quite remarkable how hard stuff is until you’ve done it once or twice. The seemingly insurmountable task becomes second nature once mastered. Hey, even Alex Gorbachev and Jonathan Lewis were once beginners just like you and me. In the days when I got started there was no internet, no Twitter, no Metalink (MOS), and little, if any, email. We used the good old-fashioned phone … a traditional landline at that. They used to have a round apparatus with holes in it called a “dial.”

Something you have done may be a mystery to others, and tasks that seem menial to you may look like a vertical wall to someone else. Think back to 10 things you have done in the past few weeks. Estimate how many of those 10 would be “news” to others… Got that right… All 10. We owe it to others to blog about what we do on a daily basis and to go out of our way to find time to educate the masses.

To shed some light on an example that went down in the early-to-mid ’90s, picture the following (users of SQL*Forms 2 and 3 may remember this). Triggers were used in a way similar to how they are used today. Events occurred as we moved around a screen (character-based at that :)). A common trigger was called POST-CHANGE, and I became frustrated as we moved to Forms 3, remembering that this trigger would not fire until the cursor left a field. I needed a way to execute a trigger while the cursor still resided in a field. Along comes a developer with 3 months' experience. She suggests coding an ON-VALIDATE-FIELD trigger. “Swell,” I said to myself, knowing full well that this trigger, too, would not fire until the cursor left the field. So as not to offend her, I did just that. She also suggested placing the text “ENTER;” in the trigger code, and all would proceed exactly as hoped.

I tried it out after chuckling to myself, based on what I already knew about Forms, and it WORKED. Therein lies the rub… No matter how little you may know, your expertise may lie in a corner of technology others have not yet experienced. Your experiences are valuable to others, and it is your obligation to blog about them. Nothing to blog about — think again.

Automating DataPump Export


What’s the most elaborate thing you have done with DataPump?

So there I was, given the requirement to export multiple partitions for multiple tables, where each partition gets its own dump file named in the format “tablename_partitionname.dmp”, pondering how this could be done efficiently.

With the following metadata and requirements, what approach would you take?

If you are curious about the approach I used, read on.

TABLE_OWNER                    TABLE_NAME                     PARTITION_NAME
------------------------------ ------------------------------ ------------------------------
MDINH                          A_TAB                          P001
MDINH                          A_TAB                          P002
MDINH                          A_TAB                          P003
MDINH                          A_TAB                          P004
MDINH                          A_TAB                          P005
MDINH                          B_TAB                          P001
MDINH                          B_TAB                          P002
MDINH                          B_TAB                          P003
MDINH                          B_TAB                          P004
MDINH                          B_TAB                          P005

Here’s the demo:

$ nohup sqlplus "/ as sysdba" @exp_api.sql > exp_api.log 2>&1 &

$ cat exp_api.log
nohup: ignoring input

SQL*Plus: Release 11.2.0.4.0 Production on Wed Feb 26 20:28:07 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@db01):PRIMARY> -- DataPump Export (EXPDP) Fails With Errors ORA-39001 ORA-39000 ORA-31641 ORA-27054 ORA-27037 When The Dump File Is On NFS Mount Point (Doc ID 1518979.1)
ARROW:(SYS@db01):PRIMARY> -- Work around for the above mentioned error
ARROW:(SYS@db01):PRIMARY> alter system set events '10298 trace name context forever, level 32';

System altered.

Elapsed: 00:00:00.00
ARROW:(SYS@db01):PRIMARY> declare
  2      h1 number;
  3      dir_name varchar2(30);
  4  begin
  5      dir_name := 'DPDIR';
  6      for x in (
  7          select table_owner, table_name, partition_name
  8          from   dba_tab_partitions
  9          where  table_owner = 'MDINH' and table_name in ('A_TAB','B_TAB') and regexp_like(partition_name,'[0-4]$')
 10          order  by table_owner, table_name, partition_position
 11      ) loop
 12
 13          h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'TABLE');
 14
 15          dbms_datapump.add_file (
 16              handle    => h1,
 17              filename  => x.table_name||'_'||x.partition_name||'.dmp',
 18              reusefile => 1,
 19              directory => dir_name,
 20              filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
 21
 22          dbms_datapump.add_file (
 23              handle    => h1,
 24              filename  => 'exp_'||x.table_name||'_'||x.partition_name||'.log',
 25              directory => dir_name,
 26              filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
 27
 28          dbms_datapump.set_parameter (
 29              handle => h1,
 30              name   => 'INCLUDE_METADATA',
 31              value  => 0);
 32
 33          dbms_datapump.metadata_filter (
 34              handle => h1,
 35              name   => 'SCHEMA_EXPR',
 36              value  => 'IN ('''||x.table_owner||''')');
 37
 38          dbms_datapump.metadata_filter (
 39              handle => h1,
 40              name   => 'NAME_EXPR',
 41              value  => 'IN ('''||x.table_name||''')');
 42
 43          dbms_datapump.data_filter (
 44              handle      => h1,
 45              name        => 'PARTITION_LIST',
 46              value       => x.partition_name,
 47              table_name  => x.table_name,
 48              schema_name => x.table_owner);
 49
 50          dbms_datapump.start_job (handle => h1);
 51          dbms_datapump.detach (handle => h1);
 52      end loop;
 53  end;
 54  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:10.92
ARROW:(SYS@db01):PRIMARY> alter system set events '10298 trace name context off';

System altered.

Elapsed: 00:00:00.00
ARROW:(SYS@db01):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Review export log:

$ ls -l exp*.log
-rw-r--r--. 1 oracle oinstall 2888 Feb 26 20:28 exp_api.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P001.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P002.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P003.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P004.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P001.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P002.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P003.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P004.log

Review export dump:

$ ls -l *.dmp
-rw-r-----. 1 oracle oinstall   90112 Feb 26 20:28 A_TAB_P001.dmp
-rw-r-----. 1 oracle oinstall   98304 Feb 26 20:28 A_TAB_P002.dmp
-rw-r-----. 1 oracle oinstall  188416 Feb 26 20:28 A_TAB_P003.dmp
-rw-r-----. 1 oracle oinstall 1069056 Feb 26 20:28 A_TAB_P004.dmp
-rw-r-----. 1 oracle oinstall   90112 Feb 26 20:28 B_TAB_P001.dmp
-rw-r-----. 1 oracle oinstall   98304 Feb 26 20:28 B_TAB_P002.dmp
-rw-r-----. 1 oracle oinstall  188416 Feb 26 20:28 B_TAB_P003.dmp
-rw-r-----. 1 oracle oinstall 1069056 Feb 26 20:28 B_TAB_P004.dmp

Review job status:

$ grep "successfully completed" exp*.log
exp_api.log:PL/SQL procedure successfully completed.
exp_A_TAB_P001.log:Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Wed Feb 26 20:28:09 2014 elapsed 0 00:00:01
exp_A_TAB_P002.log:Job "SYS"."SYS_EXPORT_TABLE_03" successfully completed at Wed Feb 26 20:28:10 2014 elapsed 0 00:00:02
exp_A_TAB_P003.log:Job "SYS"."SYS_EXPORT_TABLE_04" successfully completed at Wed Feb 26 20:28:11 2014 elapsed 0 00:00:02
exp_A_TAB_P004.log:Job "SYS"."SYS_EXPORT_TABLE_05" successfully completed at Wed Feb 26 20:28:13 2014 elapsed 0 00:00:02
exp_B_TAB_P001.log:Job "SYS"."SYS_EXPORT_TABLE_06" successfully completed at Wed Feb 26 20:28:14 2014 elapsed 0 00:00:02
exp_B_TAB_P002.log:Job "SYS"."SYS_EXPORT_TABLE_07" successfully completed at Wed Feb 26 20:28:16 2014 elapsed 0 00:00:02
exp_B_TAB_P003.log:Job "SYS"."SYS_EXPORT_TABLE_08" successfully completed at Wed Feb 26 20:28:17 2014 elapsed 0 00:00:03
exp_B_TAB_P004.log:Job "SYS"."SYS_EXPORT_TABLE_09" successfully completed at Wed Feb 26 20:28:19 2014 elapsed 0 00:00:02

Review exported partition:

$ grep "exported" exp*.log
exp_A_TAB_P001.log:. . exported "MDINH"."A_TAB":"P001"                      6.351 KB       9 rows
exp_A_TAB_P002.log:. . exported "MDINH"."A_TAB":"P002"                      14.89 KB      90 rows
exp_A_TAB_P003.log:. . exported "MDINH"."A_TAB":"P003"                      101.1 KB     900 rows
exp_A_TAB_P004.log:. . exported "MDINH"."A_TAB":"P004"                      963.3 KB    9000 rows
exp_B_TAB_P001.log:. . exported "MDINH"."B_TAB":"P001"                      6.351 KB       9 rows
exp_B_TAB_P002.log:. . exported "MDINH"."B_TAB":"P002"                      14.89 KB      90 rows
exp_B_TAB_P003.log:. . exported "MDINH"."B_TAB":"P003"                      101.1 KB     900 rows
exp_B_TAB_P004.log:. . exported "MDINH"."B_TAB":"P004"                      963.3 KB    9000 rows

Example of completed log:

$ cat exp_B_TAB_P001.log
Starting "SYS"."SYS_EXPORT_TABLE_06":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8 MB
. . exported "MDINH"."B_TAB":"P001"                      6.351 KB       9 rows
Master table "SYS"."SYS_EXPORT_TABLE_06" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_06 is:
  /tmp/B_TAB_P001.dmp
Job "SYS"."SYS_EXPORT_TABLE_06" successfully completed at Wed Feb 26 20:28:14 2014 elapsed 0 00:00:02

SQL Scripts:

exp_api.sql:

set timing on echo on
-- DataPump Export (EXPDP) Fails With Errors ORA-39001 ORA-39000 ORA-31641 ORA-27054 ORA-27037 When The Dump File Is On NFS Mount Point (Doc ID 1518979.1)
-- Work around for the above mentioned error
alter system set events '10298 trace name context forever, level 32';
declare
    h1 number;
    dir_name varchar2(30);
begin
    dir_name := 'DPDIR';
    for x in (
        select table_owner, table_name, partition_name
        from   dba_tab_partitions
        where  table_owner = 'MDINH' and table_name in ('A_TAB','B_TAB') and regexp_like(partition_name,'[0-4]$')
        order  by table_owner, table_name, partition_position
    ) loop

        h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'TABLE');

        dbms_datapump.add_file (
            handle    => h1,
            filename  => x.table_name||'_'||x.partition_name||'.dmp',
            reusefile => 1, -- REUSE_DUMPFILES=Y
            directory => dir_name,
            filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

        dbms_datapump.add_file (
            handle    => h1,
            filename  => 'exp_'||x.table_name||'_'||x.partition_name||'.log',
            directory => dir_name,
            filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

        -- CONTENT = DATA_ONLY    
        dbms_datapump.set_parameter (
            handle => h1,
            name   => 'INCLUDE_METADATA',
            value  => 0);

        dbms_datapump.metadata_filter (
            handle => h1,
            name   => 'SCHEMA_EXPR',
            value  => 'IN ('''||x.table_owner||''')');

        dbms_datapump.metadata_filter (
            handle => h1,
            name   => 'NAME_EXPR',
            value  => 'IN ('''||x.table_name||''')');

        dbms_datapump.data_filter (
            handle      => h1,
            name        => 'PARTITION_LIST',
            value       => x.partition_name,
            table_name  => x.table_name,
            schema_name => x.table_owner);

        dbms_datapump.start_job (handle => h1);
        dbms_datapump.detach (handle => h1);
    end loop;
end;
/
alter system set events '10298 trace name context off';
exit
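For comparison, the same per-partition exports could also be driven from the expdp command line rather than the DBMS_DATAPUMP API. This is a hedged sketch, not the approach used above: it assumes the same DPDIR directory object, OS authentication, and expdp's TABLES=owner.table:partition syntax for single-partition exports; the gen_expdp_cmds helper is purely illustrative.

```shell
#!/bin/sh
# Hypothetical driver: generate one expdp command per partition, mirroring
# the dump/log naming convention used by the PL/SQL loop above.
gen_expdp_cmds() {
  owner=$1; shift
  for spec in "$@"; do                    # each spec has the form TABLE:PARTITION
    tab=${spec%%:*}
    part=${spec##*:}
    echo "expdp \"'/ as sysdba'\" directory=DPDIR tables=${owner}.${tab}:${part} dumpfile=${tab}_${part}.dmp logfile=exp_${tab}_${part}.log content=DATA_ONLY reuse_dumpfiles=Y"
  done
}

# Print the commands; pipe the output to sh (or a scheduler) to run them.
gen_expdp_cmds MDINH A_TAB:P001 A_TAB:P002 B_TAB:P001
```

Generating the commands first, instead of executing them directly, makes the batch easy to review before running, at the cost of losing the single-session control the DBMS_DATAPUMP API gives you.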

Reference:

DBMS_DATAPUMP

What is the difference between logical and physical corruption in Oracle?


When we talk about logical corruption, there are two different failure states that fall under this label:

  1. Accidental or incorrect modification of application data by a user or application.

    In this scenario, a user or application, either by misadventure or resulting from an application bug, changes data in a database to incorrect or inappropriate values. An example would be an engineer who performs an update, but forgets to formulate the predicate such that it updates only a single record, and instead accidentally updates (and commits) changes to thousands of records. When we perform an assessment of a client’s systems, we look carefully at how the client is managing retention of database undo data, archived redo logs and the recycle bin. Many clients assume that physical backups serve all aspects of recoverability for Oracle. On the contrary, effective management of these components can greatly reduce the complexity, RPO and RTO in repairing this type of fault.

  2. Logical (and physical) corruption of data blocks. Block corruptions come in two types:

    Physical corruptions (media corrupt blocks) are blocks that have sustained obvious physical damage. When Oracle detects an inconsistency between the SCN in the block header and the SCN in the block footer, or the expected header and footer structures are not present or are mangled, the Oracle session raises an exception upon reading the block (ORA-01578: ORACLE data block corrupted…). The call to Oracle fails, and the exception is written to the Oracle alert log and trace files. Physical corruptions are generally the result of infrastructure problems and can be introduced in a variety of ways. Some possible sources are storage array cache corruption, array firmware bugs, filesystem bugs, and array controller battery failure combined with a power outage. One can imagine at least a dozen other possible sources of such corruption. Physically corrupt blocks can be repaired using Oracle Recovery Manager’s BLOCKRECOVER command. This operation restores and recovers the block in place in the file without interrupting any other sessions operating against the database.

    Logically corrupt blocks are blocks that have good header and footer SCNs, but that have some other kind of internal inconsistency. For instance, one of the block header structures, which tracks the number of locks associated with rows in the block, may differ from the actual number of locks present. Another example would be header information on available space that differs from the true available space in the block. Upon encountering these faults, the calling session generally raises ORA-00600 (“internal error”) with additional arguments that allow the specific type of defect to be diagnosed, and the call fails. The exception is written to the alert log and trace files. As with physical corruption, there is a wide range of ways the fault could have been introduced, including all of the ways listed above. However, logically corrupt blocks are much more likely to have been introduced by a failure in the Oracle software, such as an Oracle bug or cache corruption.

    By default, Oracle performs sanity checks on blocks before they are written. For highly risk-averse enterprises, however, additional checks, including checks for logical inconsistencies and block checksum verification, can be enabled. These features consume additional resources, so they should be used judiciously.
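For example (a sketch only; the values shown are illustrative, and both settings add CPU overhead that should be measured before enabling in production), the extra checking is controlled by two instance parameters:

```sql
-- Logical intra-block checking: OFF | LOW | MEDIUM | FULL
SQL> alter system set db_block_checking = 'MEDIUM' scope=both;

-- Block checksum computation/verification: OFF | TYPICAL | FULL
SQL> alter system set db_block_checksum = 'FULL' scope=both;
```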

When RMAN Validate Creates New Files


While doing some testing, I found RMAN doing something unexpected.

After making an RMAN backup, I would run the VALIDATE RECOVERY FILES command.

When it completed, there were twice as many backup files as when I started.

Please note that this is Oracle 11.2.0.3; that will be important later on.

Here is the list of current backup files:

RMAN> crosscheck backup;
 using channel ORA_DISK_1
 crosschecked backup piece: found to be 'AVAILABLE'
 backup piece handle=/u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup RECID=112 STAMP=842454367
 crosschecked backup piece: found to be 'AVAILABLE'
 backup piece handle=/u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup RECID=113 STAMP=842454432
 crosschecked backup piece: found to be 'AVAILABLE'
 backup piece handle=/u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup RECID=114 STAMP=842454556
 Crosschecked 3 objects

Following are some pertinent parameters:

12:46:52 SYS@js01 AS SYSDBA> show parameter db_recovery_file_dest

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest		     string	 /u01/app/oracle/fra
db_recovery_file_dest_size	     big integer 4G

12:47:00 SYS@js01 AS SYSDBA> show parameter log_archive_dest_1

NAME				     TYPE	 VALUE
------------------------------------ ----------- ----------------------------------
log_archive_dest_1		     string	 LOCATION=USE_DB_RECOVERY_FILE_DEST

Now see what happens when VALIDATE RECOVERY FILES is run.
Listings may be edited for brevity.

RMAN> validate recovery files;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
specification does not match any datafile copy in the repository
channel ORA_DISK_1: starting validation of archived log
channel ORA_DISK_1: specifying archived log(s) for validation
input archived log thread=1 sequence=1212 RECID=581 STAMP=842454820
input archived log thread=1 sequence=1213 RECID=582 STAMP=842454821
...
input archived log thread=1 sequence=1232 RECID=601 STAMP=842531265
input archived log thread=1 sequence=1233 RECID=602 STAMP=842531265
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01

List of Archived Logs
=====================
Thrd Seq     Status Blocks Failing Blocks Examined Name
---- ------- ------ -------------- --------------- ---------------
1    1212    OK     0              97494           /u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1212_9lgwwng0_.arc
1    1213    OK     0              97494           /u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1213_9lgwwnqx_.arc
...
1    1232    OK     0              13              /u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1232_9lk7kkvh_.arc
1    1233    OK     0              1               /u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1233_9lk7kkww_.arc
channel ORA_DISK_1: input backup set: count=114, stamp=842454366, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140317T150606_9lk8nfr3_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:35
channel ORA_DISK_1: input backup set: count=115, stamp=842454431, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140317T150606_9lk8ojtw_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:01
channel ORA_DISK_1: input backup set: count=117, stamp=842454556, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140317T150915_9lk8okwy_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:03
Finished validate at 18-MAR-14

Notice that for each existing backup file an exact copy was made.
This was verified by comparing the files' md5sum checksums.

== as shown by md5sum, these are exact duplicates

[oracle@dev ]$ md5sum /u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup /u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup /u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup
21b1c12d47216ce8ac2413e8c7e3fc6e  /u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup
7524091d41785c793ff7f3f504b76082  /u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup
974bb354db9eb49770991334c891add5  /u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup

[oracle@dev ]$ md5sum /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140317T150606_9lk8nfr3_.bkp /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140317T150606_9lk8ojtw_.bkp /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140317T150915_9lk8okwy_.bkp
21b1c12d47216ce8ac2413e8c7e3fc6e  /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140317T150606_9lk8nfr3_.bkp
7524091d41785c793ff7f3f504b76082  /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140317T150606_9lk8ojtw_.bkp
974bb354db9eb49770991334c891add5  /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140317T150915_9lk8okwy_.bkp

It then occurred to me that this behavior might be due to creating backups outside the FRA, with Oracle wanting a copy of each file inside the FRA. If so, this would probably be a bug, but it was interesting enough to warrant a test.

The following shows that all previous backups were removed and new ones created, as well as the space consumed in the FRA.

== Delete all backups, and create backups in FRA only

RMAN> list backup;
specification does not match any backup in the repository

RMAN> crosscheck backup;
using channel ORA_DISK_1
specification does not match any backup in the repository

====== create new backups in FRA

RMAN> backup database;

Starting backup at 18-MAR-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/u01/oradata/JS01/datafile/o1_mf_users_8g69rzg7_.dbf
input datafile file number=00003 name=/u01/oradata/JS01/datafile/o1_mf_undotbs1_8g69rgd1_.dbf
input datafile file number=00002 name=/u01/oradata/JS01/datafile/o1_mf_sysaux_8g69qxt0_.dbf
input datafile file number=00001 name=/u01/oradata/JS01/datafile/o1_mf_system_8g69qb0g_.dbf
input datafile file number=00005 name=/u01/oradata/JS01/datafile/o1_mf_atg_data_8hk7kc7f_.dbf
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp tag=TAG20140318T125302 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp tag=TAG20140318T125302 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-MAR-14

RMAN> backup archivelog all delete input;

Starting backup at 18-MAR-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=1212 RECID=581 STAMP=842454820
input archived log thread=1 sequence=1213 RECID=582 STAMP=842454821
...
input archived log thread=1 sequence=1233 RECID=602 STAMP=842531265
input archived log thread=1 sequence=1234 RECID=603 STAMP=842532824
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp tag=TAG20140318T125344 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1212_9lgwwng0_.arc RECID=581 STAMP=842454820
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1213_9lgwwnqx_.arc RECID=582 STAMP=842454821
...
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1233_9lk7kkww_.arc RECID=602 STAMP=842531265
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1234_9lk928kg_.arc RECID=603 STAMP=842532824
Finished backup at 18-MAR-14

RMAN> crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp RECID=145 STAMP=842532783
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp RECID=146 STAMP=842532809
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp RECID=147 STAMP=842532824
Crosschecked 3 objects

12:54:40 SYS@js01 AS SYSDBA> @fra

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                          0                         0               0
BACKUP PIECE                      35.24                         0               3
IMAGE COPY                            0                         0               0
FLASHBACK LOG                         0                         0               0
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.

Again there are three backup files, this time in the FRA. The files are using 35% of the FRA space.

Let’s run another VALIDATE RECOVERY FILES and find out what happens.


RMAN> validate recovery files;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
specification does not match any archived log in the repository
specification does not match any datafile copy in the repository
channel ORA_DISK_1: input backup set: count=140, stamp=842532782, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk955rv_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:15
channel ORA_DISK_1: input backup set: count=141, stamp=842532808, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk95nvg_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:01
channel ORA_DISK_1: input backup set: count=142, stamp=842532824, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk95oxv_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:03
Finished validate at 18-MAR-14

RMAN> crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp RECID=145 STAMP=842532783
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk955rv_.bkp RECID=148 STAMP=842532917
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp RECID=146 STAMP=842532809
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk95nvg_.bkp RECID=149 STAMP=842532932
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp RECID=147 STAMP=842532824
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk95oxv_.bkp RECID=150 STAMP=842532933
Crosschecked 6 objects

12:54:41 SYS@js01 AS SYSDBA> @fra

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                          0                         0               0
BACKUP PIECE                      70.47                     35.24               6
IMAGE COPY                            0                         0               0
FLASHBACK LOG                         0                         0               0
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.

That is pretty clear: there are duplicates of each file. This is also shown by the FRA now being 70% consumed by backup pieces, whereas previously only 35% of the FRA was used.

This seems like a bug, and a brief search of My Oracle Support finds this relevant document:

Bug 14248496 RMAN ‘validate recovery files’ creates a piece copy for every execution

This fits the situation well, and the version of this database, 11.2.0.3, is among the affected versions.
As per the document, the bug is fixed in 11.2.0.4.

The next step, of course, is to try the same operation in 11.2.0.4.
This test database also runs on Linux 6; the only difference is that the database version is 11.2.0.4.

RMAN> crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/rman/rman-db-02p3ggdi_1_1.bkup RECID=1 STAMP=842547637
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/rman/rman-db-03p3gggk_1_1.bkup RECID=2 STAMP=842547732
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/rman/rman-db-04p3ggjt_1_1.bkup RECID=3 STAMP=842547838
Crosschecked 3 objects

RMAN> validate recovery files;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
specification does not match any archived log in the repository
specification does not match any datafile copy in the repository
skipping backup sets; RECOVERY FILES, RECOVERY AREA or DB_RECOVERY_FILE_DEST option cannot validate backup set
Finished validate at 18-MAR-14

That wasn’t exactly promising: VALIDATE RECOVERY FILES now simply exits with a message that backup sets cannot be validated with this command.
Apparently ‘fixing’ the bug was just a matter of disabling this bit of functionality.
This is at odds with the Oracle 11g documentation for RMAN VALIDATE, which says the following in the section “RECOVERY FILES”:

Validates all recovery files on disk, whether they are stored in the fast recovery area or other locations on disk. Recovery files include full and incremental backup sets, control file autobackups, archived redo log files, and data file copies. Flashback logs are not validated.

The Oracle 12c documentation for RMAN VALIDATE says the same thing: backup sets are included in the files to be validated.

Clearly the intent seems to have been for this to work with VALIDATE RECOVERY FILES, but for some reason the fix was simply to disable the functionality.

So, what can you use instead?

Now the VALIDATE BACKUPSET command must be used to validate the backups. This is not nearly as convenient as simply issuing the VALIDATE RECOVERY FILES command, as VALIDATE BACKUPSET takes a mandatory argument, which is the primary key of the backup set.

The documentation recommends using the LIST BACKUPSET command, but this is rather inconvenient, as the keys must be parsed from the report text, as seen below.

RMAN> list backupset;

List of Backup Sets
==================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1       Full    9.36M      DISK        00:00:04     18-MAR-14
        BP Key: 1   Status: AVAILABLE  Compressed: NO  Tag: TAG20140318T170034
        Piece Name: /u01/app/oracle/rman/rman-db-02p3ggdi_1_1.bkup
  SPFILE Included: Modification time: 18-MAR-14
  SPFILE db_unique_name: ORCL
  Control File Included: Ckp SCN: 1014016      Ckp time: 18-MAR-14

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2       Full    1.07G      DISK        00:01:36     18-MAR-14
        BP Key: 2   Status: AVAILABLE  Compressed: NO  Tag: TAG20140318T170212
        Piece Name: /u01/app/oracle/rman/rman-db-03p3gggk_1_1.bkup
  List of Datafiles in backup set 2
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/system01.dbf
  2       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/sysaux01.dbf
  3       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/undotbs01.dbf
  4       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/users01.dbf
  5       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/example01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
3       Full    9.36M      DISK        00:00:02     18-MAR-14
        BP Key: 3   Status: AVAILABLE  Compressed: NO  Tag: TAG20140318T170212
        Piece Name: /u01/app/oracle/rman/rman-db-04p3ggjt_1_1.bkup
  SPFILE Included: Modification time: 18-MAR-14
  SPFILE db_unique_name: ORCL
  Control File Included: Ckp SCN: 1014639      Ckp time: 18-MAR-14

This is fine for manually validating just a few files, but it is not a workable solution for programmatically validating backup sets. Fortunately there is a better method: query the v$backup_set_details view.

  1  select session_key, session_recid, session_stamp, bs_key, recid
  2  from v$backup_set_details
  3* order by session_key
15:58:37 dev.jks.com - jkstill@js01 SQL> /

SESSION_KEY SESSION_RECID SESSION_STAMP     BS_KEY	RECID
----------- ------------- ------------- ---------- ----------
	469	      469     842532214        106	  106
	469	      469     842532214        107	  107
	469	      469     842532214        105	  105

3 rows selected.

RMAN> validate backupset 105;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp tag=TAG20140318T125302
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:07
Finished validate at 18-MAR-14

-- the same was done for BS_KEY values 106 and 107
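For more than a handful of backup sets, the VALIDATE commands can be generated straight from the same view. This is only a sketch (the spool file name is made up, and DISTINCT guards against the view returning one row per datafile in a set):

```sql
set heading off feedback off
spool validate_backupsets.rman

select distinct 'validate backupset ' || bs_key || ';'
from v$backup_set_details
order by 1;

spool off
```

The spooled file can then be run with something like rman target / cmdfile=validate_backupsets.rman.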

It is usually a good idea to investigate when something works differently than expected.
It was certainly beneficial in this case, as I was developing code on 11.2.0.3 that would later run on 11.2.0.4.
That bit of code would not have worked as expected on 11.2.0.4, but it would not have raised an error either, and would probably have gone unnoticed until it caused a recovery problem.

Using VALIDATE BACKUPSET is a workable solution, but not nearly as convenient as VALIDATE RECOVERY FILES.
Perhaps a fix will appear in a future release.
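As an aside, when the goal is simply to confirm that the backups needed for recovery are intact and readable, RESTORE ... VALIDATE is another option. It reads the backup pieces without writing any files. This is a sketch, and not a drop-in replacement for VALIDATE RECOVERY FILES, since it only checks the backups RMAN would select for a restore rather than every backup piece on disk:

```sql
RMAN> restore database validate;
RMAN> restore archivelog all validate;
```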
