Redo Log Re-Sizing

The only way to re-size redo logs is to create new ones and drop the old ones. This method has been the standard, in fact the only way, to re-size redo logs since at least 7.1.4 (that’s when I started working with Oracle; hopefully that doesn’t age me).

Let’s start by reviewing the current redo logs:
select (a.bytes/1024)/1024, a.status, b.member, b.type, b.group#
from v$log a, v$logfile b
where a.group# = b.group#;

50 CURRENT /u01/app/oracle/oradata/redo/redo01a.log ONLINE 1
50 CURRENT /u01/app/oracle/oradata/redo/redo01b.log ONLINE 1
50 INACTIVE /u02/app/oracle/oradata/redo/redo02a.log ONLINE 2
50 INACTIVE /u02/app/oracle/oradata/redo/redo02b.log ONLINE 2
50 INACTIVE /u03/app/oracle/oradata/redo/redo03a.log ONLINE 3
50 INACTIVE /u03/app/oracle/oradata/redo/redo03b.log ONLINE 3

To create additional redo log files:

alter database add logfile group 4 ('/u01/app/oracle/oradata/redo/redo4a.log',
'/u01/app/oracle/oradata/redo/redo4b.log') size 1024m;
alter database add logfile group 5 ('/u01/app/oracle/oradata/redo/redo5a.log',
'/u01/app/oracle/oradata/redo/redo5b.log') size 1024m;
alter database add logfile group 6 ('/u01/app/oracle/oradata/redo/redo6a.log',
'/u01/app/oracle/oradata/redo/redo6b.log') size 1024m;

Check which ones are active:

select (a.bytes/1024)/1024, a.status, b.member, b.type, b.group#
from v$log a, v$logfile b
where a.group# = b.group#;

50 CURRENT /u01/app/oracle/oradata/redo/redo01a.log ONLINE 1
50 CURRENT /u01/app/oracle/oradata/redo/redo01b.log ONLINE 1
50 INACTIVE /u02/app/oracle/oradata/redo/redo02a.log ONLINE 2
50 INACTIVE /u02/app/oracle/oradata/redo/redo02b.log ONLINE 2
50 INACTIVE /u03/app/oracle/oradata/redo/redo03a.log ONLINE 3
50 INACTIVE /u03/app/oracle/oradata/redo/redo03b.log ONLINE 3
1024 UNUSED /u01/app/oracle/oradata/redo/redo4a.log ONLINE 4
1024 UNUSED /u01/app/oracle/oradata/redo/redo4b.log ONLINE 4
1024 UNUSED /u01/app/oracle/oradata/redo/redo5a.log ONLINE 5
1024 UNUSED /u01/app/oracle/oradata/redo/redo5b.log ONLINE 5
1024 UNUSED /u01/app/oracle/oradata/redo/redo6a.log ONLINE 6
1024 UNUSED /u01/app/oracle/oradata/redo/redo6b.log ONLINE 6

Perform log switches until one of the new log groups is current:

alter system switch logfile;
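
If the database is in archive log mode, an alternative is to archive the current log, which performs the switch and waits for the archiver to finish with it:

alter system archive log current;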

Checkpoint to flush the data; this allows the old redo logs to be released. If you attempt to drop a redo log group that is still required for crash recovery, the statement will fail with an error (typically ORA-01624, log needed for crash recovery):

alter system checkpoint global;
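
Before dropping anything, confirm that none of the old groups are CURRENT or ACTIVE. A quick check (the group numbers here assume the layout above):

select group#, status from v$log;

Groups 1 through 3 should show INACTIVE; if any are still ACTIVE, switch and checkpoint again.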

Now drop the logfile groups that contain the files that are being replaced.

alter database drop logfile group 1;
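
Repeat the drop for the other undersized groups once they show INACTIVE:

alter database drop logfile group 2;
alter database drop logfile group 3;

Keep in mind that dropping a group only removes it from the control file; the old files still need to be deleted at the operating system level (unless Oracle Managed Files are in use).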

You might be wondering how to determine the redo log size to start with, or how you will know when the redo logs need to be re-sized.

Let’s start with the latter. How frequently are log switches happening? A quick look at the alert.log gives an indication, but how pervasive are the switches?

Thread 1 advanced to log sequence 4696 (LGWR switch)
Current log# 1 seq# 4696 mem# 0: /u01/app/oracle/oradata/redo01a.log
Current log# 1 seq# 4696 mem# 1: /u01/app/oracle/oradata/redo01b.log
Thu Jan 24 09:11:16 2013
Archived Log entry 4404 added for thread 1 sequence 4695 ID 0x5104d81c dest 1:
Thread 1 cannot allocate new log, sequence 4697
Checkpoint not complete
Current log# 1 seq# 4696 mem# 0: /u01/app/oracle/oradata/redo01a.log
Current log# 1 seq# 4696 mem# 1: /u01/app/oracle/oradata/redo01b.log
Thu Jan 24 09:11:25 2013
Thread 1 advanced to log sequence 4697 (LGWR switch)
Current log# 2 seq# 4697 mem# 0: /u02/app/data/oradata/redo02a.log
Current log# 2 seq# 4697 mem# 1: /u02/app/data/oradata/redo02b.log
Thu Jan 24 09:11:25 2013
Archived Log entry 4405 added for thread 1 sequence 4696 ID 0x5104d81c dest 1:
Thread 1 cannot allocate new log, sequence 4698
Checkpoint not complete
Current log# 2 seq# 4697 mem# 0: /u02/app/data/oradata/redo02a.log
Current log# 2 seq# 4697 mem# 1: /u02/app/data/oradata/redo02b.log
Thread 1 advanced to log sequence 4698 (LGWR switch)
Current log# 3 seq# 4698 mem# 0: /u03/app/data/oradata/redo03a.log
Current log# 3 seq# 4698 mem# 1: /u03/app/data/oradata/redo03b.log
Thu Jan 24 09:11:34 2013
Archived Log entry 4406 added for thread 1 sequence 4697 ID 0x5104d81c dest 1:
Thu Jan 24 09:12:01 2013
Thread 1 cannot allocate new log, sequence 4699
Checkpoint not complete
Current log# 3 seq# 4698 mem# 0: /u03/app/data/oradata/redo03a.log
Current log# 3 seq# 4698 mem# 1: /u03/app/data/oradata/redo03b.log

To really understand how frequently the log switches are occurring, we need to look at the history across multiple days.

select inst_id instance,
to_char(first_time,'DD-MON-YYYY') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'999') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'999') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'999') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'999') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'999') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'999') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'999') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'999') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'999') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'999') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'999') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'999') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'999') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'999') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'999') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'999') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'999') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'999') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'999') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'999') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'999') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'999') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'999') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'999') "23"
from gv$log_history
group by inst_id,to_char(first_time,'DD-MON-YYYY')
order by inst_id, to_char(first_time,'DD-MON-YYYY');
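
To focus on recent activity, the same query can be narrowed to the last week or so by adding a predicate before the group by (the seven-day window here is just an example):

where first_time > sysdate - 7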

Several factors come into play when determining the size of the redo log files, chiefly the amount of redo being generated and the tolerance for data loss. The query above produces a grid showing the number of log switches within each hour over the course of several days. You may have heard that log switches should be approximately 10 minutes apart; the goal is to have a previously used redo log file archived before LGWR switches back to it. Remember, redo logs are used in a round-robin fashion, and if the database is in archive log mode a file will not be overwritten until it has been archived.
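
To see how much redo is actually being generated, the archived log history can be summed per day. This is a sketch that assumes the database is in archive log mode and that dest_id 1 is the local archive destination:

select trunc(completion_time) day,
round(sum(blocks*block_size)/1024/1024) mb_of_redo
from v$archived_log
where dest_id = 1
group by trunc(completion_time)
order by 1;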

Redo log files should be large enough to ensure that there are no excessive checkpoints, which decrease performance. Batch and data warehouse databases require larger redo logs, while OLTP databases will generally have smaller ones. A switch interval somewhere in the neighborhood of 10 to 30 minutes is usually appropriate, but keep in mind that every system is different.
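
As a rough, hypothetical example: if the history shows a peak of about 3 GB of redo per hour and the goal is a switch roughly every 20 minutes, each group needs to hold about 1 GB, which lines up with the 1024m size used above.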

You can set ARCHIVE_LAG_TARGET in the parameter file to force a log switch at a set interval; just ensure that the redo logs are large enough to handle the redo being generated.
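
The parameter is specified in seconds. For example, to force a switch at least every 30 minutes (assuming an spfile is in use):

alter system set archive_lag_target=1800 scope=both;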

In 11g we can simply run this query to help determine the size.

select optimal_logfile_size from v$instance_recovery;
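
The value is reported in megabytes and, as far as I can tell, it is only populated when FAST_START_MTTR_TARGET is set. A slightly fuller check:

select target_mttr, estimated_mttr, optimal_logfile_size from v$instance_recovery;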

I personally estimate the amount of redo when building a new database, and then watch it over the course of its first months in production. Usually I’m able to understand the data and its activity level while testing.
