Monthly Archives: February 2013

Logical Standby Monitoring

Logical standbys add value by offloading reporting and other activities that can impact performance on the production database. However, a logical standby is a special beast that requires additional management, and it can easily fall out of step with the primary production database. Something as simple as a CREATE USER, or the creation of an unsupported object, can cause this to happen. I like Data Guard, but I would rather not manage standbys simply because they require extra babysitting.

Here are some scripts that I run periodically in an effort to proactively manage the logical standbys. I currently don't have the Data Guard broker running, so EM12c doesn't provide me the alerts that would be super beneficial in these situations. I prefer a one-stop shop to review the status and health of all the databases, as opposed to scripts running through cron.

If nothing else, one should have an alert set up to notify the right individuals when there is a lag in the apply at the logical standby. EM12c provides for this, provided you are on Bundle Patch 2 and have your alerts set up correctly. (Side bar: I'm currently working on remediating this situation. Alerts for this condition worked on the old standby, but since the cutover to the new standby they have effectively stopped.) It would be nice if the patch went smoothly for EM12c, but that has not been the case, so I do need to mitigate the situation with a quick alert running from within cron or dbms_scheduler. My preferred method is dbms_scheduler for all jobs related to the database.
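As a rough sketch of the dbms_scheduler approach (the job name, interval, and procedure name are all illustrative; CHECK_STANDBY_LAG would be the anonymous block shown later wrapped in a stored procedure):

```sql
-- Hypothetical: run a lag-check procedure every 15 minutes
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'STANDBY_LAG_ALERT',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'CHECK_STANDBY_LAG',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=15',
    enabled         => TRUE,
    comments        => 'Alert on logical standby apply lag');
END;
/
```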

The dba_logstdby_progress view has the data required for alerting, specifically the APPLIED_TIME and NEWEST_TIME columns. I did modify my original query to include the comparison of NEWEST_TIME with APPLIED_TIME; this matches what can be found in the Data Guard Menu utility. This utility is a treasure trove of awesome scripts for Data Guard. I'm not a heavy user of it, preferring to run SQL from my laptop over logging into the Linux host to perform checks, but that's just me; the menu has helped me when I didn't have a script readily available.

The job executes a query against the dba_logstdby_progress view, checking the NEWEST_TIME and APPLIED_TIME values against each other, as well as NEWEST_TIME against the current date and time (SYSDATE).

select
ltrim(rtrim(trunc((86400*(newest_time-applied_time))/60)-
60*(trunc(((86400*(newest_time-applied_time))/60)/60)))) apply,
ltrim(rtrim(trunc((86400*(sysdate-newest_time))/60)-
60*(trunc(((86400*(sysdate-newest_time))/60)/60)))) newest
into vApply, vNewest
from dba_logstdby_progress;
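Note that the expression above extracts only the minutes-past-the-hour component of the lag (total minutes minus sixty times the whole hours). If total lag in minutes is what you're after, a simpler alternative (not the query the job uses) would be:

```sql
-- Total lag expressed in whole minutes
select trunc((newest_time - applied_time) * 1440) apply_lag_minutes,
       trunc((sysdate - newest_time) * 1440) newest_lag_minutes
from dba_logstdby_progress;
```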

I select into variables and then perform the simple comparisons. Here's how the anonymous block with the calls to UTL_SMTP looks; UTL_MAIL could easily be substituted. This can be cleaned up; it was a quick attempt to put something into place while I continue to work on EM12c. I'll continue running this in the background until I'm convinced that I can rely on the alerts.

declare
vApply number;
vNewest number;
vLag number;
vLag_minutes number;
v_mailhost varchar2(200);
v_sender varchar2(200);
v_recipient varchar2(200);
v_subject varchar2(200);
v_message1 varchar2(10000);
v_message2 varchar2(10000);
v_message3 varchar2(10000);

mail_conn utl_smtp.connection;
crlf varchar2( 2 ):=CHR( 13 ) || CHR( 10 ) ;
mesg VARCHAR2(32767);

begin
v_mailhost:='10.0.0.10';
v_subject:='Lag Time For Logical Standby Database';
v_recipient:='me@somewhere.com';
v_sender:='mea@somewhere.com';
vLag:=15;

select
ltrim(rtrim(trunc((86400*(newest_time-applied_time))/60)-
60*(trunc(((86400*(newest_time-applied_time))/60)/60)))) apply,
ltrim(rtrim(trunc((86400*(sysdate-newest_time))/60)-
60*(trunc(((86400*(sysdate-newest_time))/60)/60)))) newest
into vApply, vNewest
from dba_logstdby_progress;

if vApply > vLag then
  vLag_minutes := vApply - vLag;
  dbms_output.put_line(vLag_minutes);
  v_message1:='Apply Lag Time is: '|| vLag_minutes;
elsif vNewest > vLag then
  vLag_minutes := vNewest - vLag;
  dbms_output.put_line(vLag_minutes);
  v_message1:='Lag Time is: '|| vLag_minutes;
end if;

-- only send mail when a lag message was actually built
if v_message1 is not null then
  mesg:= 'From: '||v_sender || crlf ||
  'To: '||v_recipient || crlf ||
  'Subject: '||v_subject || crlf ||
  crlf ||
  'Logical Standby ACCLSTD LAG is:' ||crlf||
  v_message1;

  mail_conn := utl_smtp.open_connection(v_mailhost,25);
  utl_smtp.helo(mail_conn,v_mailhost);
  utl_smtp.mail(mail_conn,v_sender);
  utl_smtp.rcpt(mail_conn,v_recipient);
  utl_smtp.data(mail_conn,mesg);

  utl_smtp.quit(mail_conn);
end if;

exception
when utl_smtp.invalid_operation then
  dbms_output.put_line(' Invalid Operation in Mail');
when utl_smtp.transient_error then
  dbms_output.put_line(' Temporary e-mail issue - try again');
when utl_smtp.permanent_error then
  dbms_output.put_line(' Permanent Error Encountered.');
when others then
  raise_application_error(-20002,'unable to send the mail. '||SQLERRM);
end;
/

There are other checks that should be performed on the logical standby, and the Data Guard Menu is loaded with the scripts to perform those checks. If you are interested in the companion book that details the use of the menu, as well as other Data Guard information, check out the Oracle Data Guard 11g Handbook: undocumented best practices and real-world techniques. It would have been nice to have had the book before I delved into the world of Data Guard, but sometimes being thrown into the rough waters is the best teacher.

DBMS_REDEFINITION Failed Now What?

I was cruising along with the dbms_redefinition of a very large table in our QA environment. The goal was changing the table to a partitioned table. Then the unthinkable happened: I received a space error.

ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-01688: unable to extend table TEST.TABLE1 partition P201212
by 8192 in tablespace USERS
ORA-06512: at "SYS.DBMS_REDEFINITION", line 52
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1646
ORA-06512: at line 1

This particular database is on a shared server with several QA databases, and we just squeak by with space. Unfortunately it's not on a SAN, so I can't simply have more space allocated.

In order to recover from this error I must run the ABORT_REDEF_TABLE procedure.

begin
dbms_redefinition.abort_redef_table('TEST','TABLE1','TABLE1_INTERIM');
end;
/

The parameters are the schema, the original table name, and the interim table name.

This procedure drops all the logs and tables that were created during the process, but not the interim table. If you have decided not to perform the redefinition at all, you will need to drop that table separately.

Once this procedure is complete, you can start the redefinition process over with the START_REDEF_TABLE procedure.
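A minimal sketch of starting over, reusing the schema and table names from the abort call above:

```sql
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('TEST', 'TABLE1', 'TABLE1_INTERIM');
END;
/
```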

Re-Create Reference Constraints Dynamically

I currently have a project to modify several tables from non-partitioned to partitioned. A fairly easy, although tedious, process that I documented in this blog. However, one of the stumbling points was re-creating the dropped referential constraints, which is required when using dbms_redefinition, since the tables with references to the table being modified will point to the original non-partitioned table under the interim table name. We can't drop the interim table (which is now the original table), so that we can rename the constraints and indexes appropriately (back to their original names). The problem wasn't recreating these constraints but finding the most efficient method of pre-creating the script. I always perform my maintenance tasks from scripts that have been tested through all the environments. Of course, the constraints would be lost if they were dropped prior to creating a script to re-create them. I found a few different approaches. I didn't need anything specific or too complex, so I turned to the internet, which is almost always loaded with scripts ready to be used. I settled on an approach by Duncan Berriman.

I did modify the script slightly to include the owner of the table. This allows me to run the script as myself against multiple schemas.
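As another option worth noting (a sketch, not the script referenced above; the schema and table names are illustrative), DBMS_METADATA can generate the DDL for every foreign key that references a table, which can be spooled to a script before the constraints are dropped:

```sql
SET LONG 100000 PAGESIZE 0

-- DDL for all foreign keys that reference TEST.TABLE1
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('REF_CONSTRAINT', 'TABLE1', 'TEST')
FROM dual;
```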

Happy redefinition!

Restart EM12c Agent

At times I'm not able to restart the EM12c agent from the console. Fortunately, the command line works like a champ.

cd to the bin directory under your agent home:

cd /u01/app/oracle/agent/agent_inst/bin

./emctl start agent

Oracle Enterprise Manager 12c Cloud Control 12.1.0.1.0
Copyright (c) 1996, 2011 Oracle Corporation.  All rights reserved.
Starting agent ........................................ started.

A ps -ef|grep emagent will provide the details of the process once started.

You can then reload the page within the console, and the agent should now display as up. If you don't get the all clear with the green up arrow:

1. Click on the target of interest from the main target list page
2. Click on the word Agent with the downward triangle.
3. Displayed is a list of actions that can be performed on the agent. Select Resynchronization.
4. The Agent Resynchronization page will be displayed. Leave the 'Unblock agent on successful completion' check box checked.
5. Move over to the far right hand side of the page and click Continue.

Once this task is completed the agent should display as up.

Non-Partition To Partition Table

In reviewing the table structures and the SQL statements executed within the database over a 24-hour period, it became apparent that partitioning certain date-driven tables would benefit performance by allowing for partition elimination. Identifying good candidates for partitioning is the easy part. Changing them is a little more challenging.

There are three methods that I have seen used to change non-partitioned tables to partitioned.

A. Starting with the most cumbersome method:
1. Create an interim table with all the dependents (constraints, triggers, indexes).
2. Copy the data to the new table. CTAS would work, but you would then need to create the dependents.
3. Drop the original table.
4. Create the new table with the old name including all the dependents with the standard names with the partitioning information.
5. Copy the data back from the interim table to the new partitioned table.
6. Don’t forget statistics!
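The copy in step 2 can be as simple as a direct-path insert (a sketch; the interim table name is illustrative):

```sql
-- Direct-path copy into the interim table
INSERT /*+ APPEND */ INTO test.table1_interim
SELECT * FROM test.table1;
COMMIT;
```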

B. Next is exchange partition:
1. Create an interim table. This will be the same interim table that we use later for dbms_redefinition, with one difference: the partition below uses MAXVALUE, which allows all of the original data to transfer into that single partition. We will handle splitting the data into additional partitions at the end.
2. The interim table DDL:


CREATE TABLE "TEST"."TABLE1_PART"
( "ID" NUMBER NOT NULL ENABLE,
"DETAIL" NUMBER(15,0) NOT NULL ENABLE,
"TEXT" VARCHAR2(4000),
"REC" VARCHAR2(4) NOT NULL ENABLE,
"REC_" VARCHAR2(2000),
"START_DT" DATE NOT NULL ENABLE,
"STATUS" VARCHAR2(20) NOT NULL ENABLE,
"MESSAGE" VARCHAR2(4000),
CONSTRAINT "TABLE1_PK1" PRIMARY KEY ("ID", "DETAIL")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "INDX" ENABLE,
CONSTRAINT "TABLE1_FK1" FOREIGN KEY ("ID")
REFERENCES "TEST"."LOG" ("ID") ENABLE
) SEGMENT CREATION IMMEDIATE
PCTFREE 20 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"
PARTITION BY RANGE ("START_DT")
(PARTITION "P2013" VALUES LESS THAN (MAXVALUE))
ENABLE ROW MOVEMENT;

3. Create indexes and constraints that may not have been included in the original script above. Keep in mind this will be unique to your situation.

create index TEST.IDX1 on TEST.TABLE1_PART(start_dt) Local;

create index TEST.IDX2 on TEST.TABLE1_PART(detail) local;

4. Now we want to exchange the contents of the two tables:

ALTER TABLE test.table1_part
EXCHANGE PARTITION p2013
WITH TABLE test.table1
WITHOUT VALIDATION
UPDATE GLOBAL INDEXES;

5. Drop the original table TEST.TABLE1, rename the interim table to TABLE1, and rename all the constraints and indexes. You will need to handle the referential constraints from other tables as well.

6. Now to add the appropriate partitions

ALTER TABLE test.table1
SPLIT PARTITION p2013 AT (TO_DATE('2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
INTO (PARTITION p201301,
PARTITION p2013)
UPDATE GLOBAL INDEXES;

ALTER TABLE test.table1
SPLIT PARTITION p2013 AT (TO_DATE('2013-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
INTO (PARTITION p201302,
PARTITION p2013)
UPDATE GLOBAL INDEXES;

Continue until you have created all the necessary partitions for your data.
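Generating those statements by hand gets tedious; a small generator along these lines can help (a sketch with hypothetical month boundaries; adjust the table and partition names to your own, and run with SET SERVEROUTPUT ON):

```sql
-- Emit one SPLIT PARTITION statement per month boundary in 2013
BEGIN
  FOR m IN 1 .. 12 LOOP
    DBMS_OUTPUT.PUT_LINE(
      'ALTER TABLE test.table1 SPLIT PARTITION p2013 AT (TO_DATE('''
      || TO_CHAR(ADD_MONTHS(DATE '2013-01-01', m - 1), 'YYYY-MM-DD')
      || ''', ''YYYY-MM-DD'')) INTO (PARTITION p2013'
      || TO_CHAR(ADD_MONTHS(DATE '2013-01-01', m - 1), 'MM')
      || ', PARTITION p2013) UPDATE GLOBAL INDEXES;');
  END LOOP;
END;
/
```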

7. Verify the results.

C. Lastly, DBMS_REDEFINITION. This has become my preferred method. You still have to create the interim table, but it is what it is at this point. Wouldn't it be nice if we could somehow magically create partitions on an existing table without having to create an interim table?

1. Determine partition scheme, to include naming, and partition key.
2. Create an interim table with the partition information included. This can be easily handled by:

select dbms_metadata.get_ddl('TABLE','TABLE1','TEST') from dual;

The output will be similar to this:


CREATE TABLE "TEST"."TABLE1"
( "ID" NUMBER NOT NULL ENABLE,
"DETAIL" NUMBER(15,0) NOT NULL ENABLE,
"TEXT" VARCHAR2(4000),
"REC" VARCHAR2(4) NOT NULL ENABLE,
"REC_" VARCHAR2(2000),
"START_DT" DATE NOT NULL ENABLE,
"STATUS" VARCHAR2(20) NOT NULL ENABLE,
"MESSAGE" VARCHAR2(4000),
CONSTRAINT "TABLE1_PK" PRIMARY KEY ("ID", "DETAIL")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "INDX" ENABLE,
CONSTRAINT "TABLE1_FK" FOREIGN KEY ("ID")
REFERENCES "TEST"."LOG" ("ID") ENABLE
) SEGMENT CREATION IMMEDIATE
PCTFREE 20 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ENABLE ROW MOVEMENT;

The above create statement needs to be modified to change the names of the created objects, since they already exist. I'm not using the new 11g interval partitioning simply because of the manner in which we archive and name the archive files. Remember, this is an interim table.


CREATE TABLE "TEST"."TABLE1_PART"
( "ID" NUMBER NOT NULL ENABLE,
"DETAIL" NUMBER(15,0) NOT NULL ENABLE,
"TEXT" VARCHAR2(4000),
"REC" VARCHAR2(4) NOT NULL ENABLE,
"REC_" VARCHAR2(2000),
"START_DT" DATE NOT NULL ENABLE,
"STATUS" VARCHAR2(20) NOT NULL ENABLE,
"MESSAGE" VARCHAR2(4000),
CONSTRAINT "TABLE1_PK1" PRIMARY KEY ("ID", "DETAIL")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "INDX" ENABLE,
CONSTRAINT "TABLE1_FK1" FOREIGN KEY ("ID")
REFERENCES "TEST"."LOG" ("ID") ENABLE
) SEGMENT CREATION IMMEDIATE
PCTFREE 20 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"
PARTITION BY RANGE ("START_DT")
(PARTITION "P201301" VALUES LESS THAN (TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION "P201302" VALUES LESS THAN (TO_DATE(' 2013-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')))
ENABLE ROW MOVEMENT;

3. begin dbms_redefinition.start_redef_table('TEST','TABLE1','TABLE1_PART'); end;
— this begins the redefinition process
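Note that START_REDEF_TABLE will raise an error if the table isn't eligible for online redefinition, so a quick pre-check can save a surprise (a sketch, assuming redefinition driven by the primary key):

```sql
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE(
    uname        => 'TEST',
    tname        => 'TABLE1',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);
END;
/
```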

4. begin dbms_redefinition.sync_interim_table('TEST','TABLE1','TABLE1_PART'); end;
-- this synchronizes the interim and the original table. It may or may not be needed
depending on how long the process takes. I tend to err on the safe side and just
include it.

5. Any indexes on the interim table need to be re-created as well, but with different names.
The PK and FK indexes have already been taken care of when we created the interim table.

create index TEST.IDX1 on TEST.TABLE1_PART(start_dt) Local;

create index TEST.IDX2 on TEST.TABLE1_PART(detail) local;

6. begin dbms_redefinition.finish_redef_table('TEST','TABLE1','TABLE1_PART'); end;

The interim table will now have the original table’s name: TABLE1
The original table will now have the interim table’s name: TABLE1_PART

A simple describe on both tables will show the switch of the structures. You can also verify the partitioning:
select partitioned from dba_tables where owner = 'TEST' and table_name = 'TABLE1';
There should be one record returned, displaying YES.

select partition_name from dba_tab_partitions where table_name = 'TABLE1';
This will list out the partitions that were created for the table.

7. Drop the referencing foreign keys and prepare the re-create script, which will point to the renamed constraint in step 9. This is necessary in order to drop the interim table.

On a side note for this particular step: although I personally believe that all steps should be scripted out prior to beginning any maintenance, this is especially true here. Once you drop the constraints, they are gone. In my case this would be very troubling if I didn't have a script ready to re-create them, since one of the tables I'm working on has 40-plus tables that reference its primary key constraint.

8. Drop the interim table
drop table test.table1_part;

9. Remember that we renamed all the constraints and indexes on the interim table so that they wouldn't conflict with the original table. Now that the interim table (originally the real table) has been dropped along with its dependencies, we can rename the constraints and indexes appropriately.

ALTER TABLE TABLE1 RENAME CONSTRAINT TABLE1_PK1 TO TABLE1_PK;
ALTER TABLE TABLE1 RENAME CONSTRAINT TABLE1_FK1 TO TABLE1_FK;
ALTER INDEX TABLE1_PK1 RENAME TO TABLE1_PK;
ALTER INDEX IDX1 RENAME TO IDX;
ALTER INDEX IDX2 RENAME TO IDX1;

10. Verify the changes:
select * from dba_constraints where table_name = 'TABLE1';
select * from dba_indexes where table_name = 'TABLE1';

So we have covered the three different methods I have used to convert a non-partitioned table to a partitioned table. As I stated, the third option, DBMS_REDEFINITION, is the method that I use without issue. There are other commands and other situations where DBMS_REDEFINITION would be of use.

8 Best Practices to keep the Release Train moving

Continuous delivery, a goal of DevOps, relies on automated testing. Too often CD is attempted with half-thought-through processes that rely on tools that don't perform to the level of functionality required. Manoja Nara outlines the steps required for CD; notice the importance of testing, and not just any testing: automated testing.

Manoja Nara's Blog

We have moved far from the traditional way of developing and releasing software. All those few weeks-to-months of developed code that gets tested for a few weeks, after which it's packaged for production, is no longer exciting for engineering and release teams. A highly iterative and agile development process has evolved as the norm for companies dealing with products of any size and complexity.

Today, we see Facebook built and deployed in less than half an hour; Flickr having 10+ deploys a day and companies thriving for a daily push. All these web companies share some common continuous delivery principles, which drive their passion for short and frequent releases.

Here are 8 best practices adopted by some of the world’s best release engineering teams. While each of these practices alone can be a book of explanation, I tried to capture some “valuable clips”. Jez Humble and David…
