Standby Redo Log Issues After Create of Standby

We ended up recreating our standbys at the DR site for reasons unrelated to any Oracle issue, which is always good.  However, when the duplication finished, the standby redo logs were not available:

RFS[12]: No standby redo logfiles available for thread 1

Our first step was to check whether the standby redo logs existed and whether they were being used by querying the v$standby_log view.  The status showed UNASSIGNED and no SCN was listed.
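
A query along these lines shows the status and size in one shot (size_mb is just bytes converted for readability):

select group#, thread#, bytes/1024/1024 as size_mb, status, first_change# from v$standby_log;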

Then I found the following post from John Hallas, which explained that a difference in size between the primary and standby redo logs can be the cause:

http://jhdba.wordpress.com/2011/02/28/incorrectly-sized-standby-redo-logs/

Comparing the sizes confirmed the mismatch: 200 MB on the standby versus 600 MB on the primary.
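
For comparison, the primary's online redo log sizes come from v$log, while the standby redo log sizes come from the v$standby_log query above:

select thread#, group#, bytes/1024/1024 as size_mb from v$log;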

Time to recreate the standby redo logs with the correct size to match the primary.  I pulled the drop and recreate statements for the standby redo logs dynamically from the primary.  Again, John Hallas does a really good job providing this information:

http://jhdba.wordpress.com/2011/02/28/scripts-to-resize-standby-redolog-files/
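
I won't reproduce the scripts here, but the statements they generate boil down to drop/add pairs along these lines; group 10 and the '+FRA' destination are placeholders for this example:

alter database drop standby logfile group 10;
alter database add standby logfile thread 1 group 10 ('+FRA') size 600m;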

 


ORA-16224 Database Guard Is Enabled

Recently I encountered this error after a rollout in production.  Several hundred procedures were invalid on the logical standby.  I was unable to recompile them and received an ORA-16224 "database guard is enabled" error.

I checked guard_status in v$database.  Usually this error is thrown when it is set to ALL; in this case, however, it was set to STANDBY.
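
The check itself is a one-liner:

select guard_status from v$database;

I was interested in disabling the guard, which could be done with: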

alter database guard none;   (the other options are STANDBY, which it was already set to, and ALL, which would not have helped)

However, this can also be set at the session level:

alter session disable guard;

That turns it off for my session only.  I whipped up a quick anonymous block to recompile all invalid procedures:

declare
  sSQL varchar2(4000);
begin
  -- loop over every invalid procedure owned by the schema and recompile it
  for i in (select object_name
              from dba_objects
             where owner = 'MY_SCHEMA'
               and status = 'INVALID'
               and object_type = 'PROCEDURE')
  loop
    sSQL := 'alter procedure my_schema.' || i.object_name || ' compile';
    execute immediate sSQL;
  end loop;
end;
/

Once that executed successfully, I simply re-enabled the guard in my session:

alter session enable guard;

Had the end user attempt to run the report again.  Success.

Instantiate A Table On Logical Standby

Seems today is the day for logical standby issues.  Two tables hit an apply conflict, an issue we had suffered previously and patched for.  It's interesting that we started receiving this again, but I'm going to skip to how I resolved the immediate issue: getting the logical standby objects back in sync so SQL Apply could continue.  The recurrence will need an MOS SR.
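
For reference, the statement that stalled SQL Apply and its error show up in dba_logstdby_events, so a query like this is a reasonable starting point:

select event_time, status, event from dba_logstdby_events order by event_time;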

I started by trying DBMS_LOGSTDBY.INSTANTIATE_TABLE, which is nice and convenient since it uses a database link, so there are no files to move around.

EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE(schema_name=>'MySchema', table_name=>'TABLE1', dblink=>'mydb');

An error is thrown about a referential constraint.  The problem: the parent table is out of sync as well.

EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE(schema_name=>'MySchema', table_name=>'PARENT1', dblink=>'mydb');

An error is thrown because PARENT1 has a column with a LONG datatype.
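
That is easy enough to confirm from the data dictionary; the owner and table name below are the same placeholders used above, shown in uppercase:

select column_name, data_type from dba_tab_columns where owner = 'MYSCHEMA' and table_name = 'PARENT1' and data_type = 'LONG';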

So now I'm forced to move files around using Data Pump.  I already have Data Pump directories defined on both databases, so I won't cover that task.

These tables are not updated frequently, usually only through a weekly file load.

alter database stop logical standby apply;

expdp system dumpfile=logical.dmp directory=data_pump_dir tables=myschema.parent1,myschema.table1

Once the export completed, I moved the dump file over to the logical standby server under the appropriate directory.

impdp system dumpfile=logical.dmp directory=data_pump_dir tables=myschema.parent1,myschema.table1

alter database start logical standby apply immediate;

Then I watched the SQL Apply process close the lag.
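
Watching the lag close is just a matter of comparing the applied and latest timestamps, for example:

select applied_time, latest_time from dba_logstdby_progress;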

Instance Using SPFILE But File Doesn’t Exist

Recently I've been playing Tetris with my databases: duplicating databases and reconciling differences created by a change management system with significant gaps.  During one of these fun moments I ended up with an instance reporting that it was using an SPFILE, but the SPFILE no longer existed on disk.  Simple fix:

create pfile from memory;

All was well, and it saved me from having to re-create the pfile from the parameters recorded in the alert.log.  The SPFILE can be created this way as well, provided you are currently running on a pfile:

create spfile from memory;
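
Both statements also accept an explicit file name if you want the copy written somewhere other than the default location; the path below is only an example:

create pfile='/tmp/initMYDB.ora' from memory;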

Every minute I can save with little tidbits such as the above is precious.

Relinking Grid Infrastructure Binaries

I recently ran into several issues while installing 11.2.0.2 Grid Infrastructure.  Most of them stemmed from how the admins had designed the system, which led to several installs and uninstalls.  On the final install everything appeared to be in order until I tried to start ASMCA to add diskgroups.  I received an error message stating that an earlier version of ASM was running and that, in order to upgrade, I needed to start ASMCA from the earlier version.  Only one version had ever been installed, so this was a faulty error message.

After reviewing the system I decided to relink the GI binaries:

As ROOT:

cd $GRID_HOME/crs/install
perl rootcrs.pl -unlock    # stops the Clusterware stack and opens up permissions for the relink

As the Grid Infrastructure home owner:

$GRID_HOME/bin/relink

As ROOT:

cd $GRID_HOME/rdbms/install
./rootadd_rdbms.sh
cd $GRID_HOME/crs/install
perl rootcrs.pl -patch    # restarts the Clusterware stack and restores the permissions

There is a bug you may encounter with rootcrs.pl.  I overcame it by running root.sh again from the Grid home as the root user.

Once that completed, ASMCA started without issue.

Agent Unable to Communicate with OMS EM12c

My phone started going crazy with alerts from EM12c; apparently all of my agents had lost communication with the OMS.  I immediately started checking all of the logs on the OMS and, not finding anything of interest, decided to try an upload from one of the hosts:

./emctl upload agent

WARN – Ping communication error
o.s.emSDK.agent.comm.exception.ConnectException [Failure connecting to https://HOST.domain.com:4889/empbs/upload , err host.domain.com]

./emctl pingOMS

Can't find host host.domain.com

Interesting, but it makes sense: after all, the only agent still able to communicate with the OMS was the one residing on the OMS host itself.

ping host.domain.com

Same result: the hostname would not resolve.

After an email exchange with the Linux administrators, it turned out some changes had been made in DNS that needed to be reverted.  Once that was done, the ping worked and all agents showed as up.

 

 

ORA-31634 Unable to Construct Unique Job Name by Default

It never fails: every time I have a quick request that I should be able to hammer out in a few minutes, something hampers it.  Today was just one of those days.  I needed to perform an expdp/impdp of a small schema between development and testing, and I received an error:

ORA-31634 Unable To Construct Unique Job Name By Default

I generally default the job name on Data Pump exports, especially if they are one-offs.  However, today that just didn't cooperate.  Apparently we had experienced some failures with other exports, and their master tables still existed.  There is a limit of 99 default job names.  An easy check:

select owner_name,state,job_name from dba_datapump_jobs;

If the number of records returned equals 99, simply drop the leftover master tables, named SYS_EXPORT_SCHEMA_xx, replacing xx with the number displayed in the output above.
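
Dropping one of the leftover master tables looks like the following; the owner and table name here are placeholders that should come from the query output, and only jobs that are no longer running should be cleaned up this way:

drop table system.sys_export_schema_01 purge;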

Restart the export.