I hastily (which will always get me into trouble) created a file through EM12c. It was Saturday, right before I headed out for a much-needed shopping spree. I figured EM would be the efficient route, but I failed to change the diskgroup location. The default diskgroup just happened to be one that did not exist in the standby's DB_FILE_NAME_CONVERT.
The DB_FILE_NAME_CONVERT is one of those magical parameters that cannot be changed online. Oracle chose to create a file in its place under the $ORACLE_HOME/dbs directory named UNNAMED000036. Since this was an ASM database, that wasn't going to work; in fact the file was never actually created, but an entry was made in the controlfile.
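If you suspect this has happened, the placeholder shows up in the controlfile's datafile list. A quick sketch for finding it (the FILE# and path will of course differ in your environment):

```sql
-- Placeholder datafiles created on the standby land under $ORACLE_HOME/dbs
-- with names like UNNAMED000036; v$datafile exposes the controlfile entry.
SELECT file#, name, status
  FROM v$datafile
 WHERE name LIKE '%UNNAMED%';
```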
The first step was to drop the UNNAMED000036 file. Since this was a physical standby I had to use the drop option:
ALTER DATABASE DATAFILE '/u01/app/oracle/product/188.8.131.52/dbs/UNNAMED000036' OFFLINE DROP;
With the datafile gone I then created a pfile from the spfile:
CREATE PFILE FROM SPFILE;
I then modified the DB_FILE_NAME_CONVERT within the pfile. The apply process was already stopped, but I needed to shut down the database and bring it to mount using the new pfile:
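For illustration, a hypothetical DB_FILE_NAME_CONVERT entry in the pfile; the diskgroup names here are made up, and the point is simply that every diskgroup the primary can create files in needs a mapping on the standby:

```
# pfile sketch: map each primary diskgroup to one that exists on the standby
*.db_file_name_convert='+DATA_PRI','+DATA_STBY','+FRA_PRI','+FRA_STBY'
```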
STARTUP NOMOUNT PFILE='/u01/app/oracle/product/184.108.40.206/dbs/initOracle.ora';
ALTER DATABASE MOUNT;
Then start the apply process:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
We run in maximum performance mode; if needed, you would restart real-time apply:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;
I monitored the apply process. Once the apply process caught up I then switched to using the spfile.
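One way to watch the apply catch up (assuming 11g-style Data Guard views) is v$dataguard_stats on the standby:

```sql
-- Apply and transport lag on the standby; once 'apply lag' settles near
-- zero, the standby is caught up and it is safe to bounce onto the spfile.
SELECT name, value, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('apply lag', 'transport lag');
```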
Now to revisit the alert thresholds within EM12c so we get a heads up on space issues before the weekend.
We recently performed our first ever failover. I'll cover the actual steps of the failover, including those that bit us because the standbys were built only to protect the data, never with actually using those databases in mind. After I successfully failed the database over to the physical standby I immediately started a level 0 backup. The backup ran without incident until the archivelogs. That's when I received the dreaded archivelog-not-found message:
The interesting thing about the message was the archivelog that the backup balked on: it wasn't a log from the current primary but from the previous primary. In fact, it was the most recent archivelog applied to the now-current primary back when it was the standby. I decided the first step would be a crosscheck:
rman target / nocatalog
crosscheck archivelog all;
I noticed right away that it started with a directory from 2009 and slowly scrolled through about 70k archivelogs, including the most recent one from the previous primary. The only archives actually found were, of course, those from the current primary. No worries; it knows the files don't exist and marked them as such. I started the archivelog backup again and it immediately failed for the same reason. So this time I decided to run a delete expired and a delete obsolete.
delete expired archivelog all;
delete obsolete;
The backup once again scrolled through 70k-plus archivelogs and failed with the same not-found error. Odd, since these archivelogs carried the previous DBID and even showed as belonging to the previous primary when reviewing the v$archived_log view.
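The incarnation split is visible directly in the controlfile. A sketch that groups the registered archivelogs by incarnation, where the pre-failover logs show up under the old RESETLOGS_ID:

```sql
-- Logs registered before the failover carry the previous incarnation's
-- RESETLOGS_ID; the current one is in v$database_incarnation
-- (STATUS = 'CURRENT').
SELECT resetlogs_id, COUNT(*) AS logs
  FROM v$archived_log
 GROUP BY resetlogs_id
 ORDER BY resetlogs_id;
```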
I found the following note on My Oracle Support. I first set out to uncatalog each archive one by one, but I quickly discovered that process would take forever, even after I scripted the uncatalog commands. I was hoping I could perform the uncatalog at the directory level; after all, it is possible to catalog a directory and have all of its archivelogs registered. That was not the case. I actually had to uncatalog all the archivelogs following the note, and then recatalog the archives. When I recataloged, I did it by directory.
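The one-by-one uncatalog lends itself to scripting. A minimal sketch of the kind of helper I mean (the function name and sample paths are mine, not from the note): it turns a list of stale archivelog paths into RMAN CHANGE ... UNCATALOG commands that can be saved to a file and fed to rman.

```shell
#!/bin/sh
# Read archivelog paths on stdin, emit one RMAN uncatalog command per path.
gen_uncatalog() {
  while IFS= read -r f; do
    printf "CHANGE ARCHIVELOG '%s' UNCATALOG;\n" "$f"
  done
}
```

Usage would be something along the lines of `ls /u02/app/oracle/archives/2009/*.arc | gen_uncatalog > uncatalog.rman`, then running the generated script through rman against the target.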
rman target / nocatalog (we choose the daring life, using the controlfile instead of a recovery catalog)
catalog start with '/u02/app/oracle/archives';
I restarted the backup and everyone is happy now.