
Saturday, February 21, 2009

The Tiniest Streams N-Way Replication Configuration

If you want to maximize your chances of messing everything up, a Streams N-way replication configuration is probably one of the best architectures you can build... Imagine you have several copies of the same table in different places; every change made to any of the copies is pushed and applied to all the other copies. Now just think about it:

  • How do you instantiate a new copy of the table without stopping everything?
  • What happens when 1 destination becomes unreachable for 1 hour? 1 day? 1 week?
  • What happens when you change the same row on 2 different table copies at about the same time?
  • What if you replicate DDL?

3-Way table replication

This post doesn't answer any of these questions; I promise I'll dig into a few of them later. Instead, it presents the tiniest 3-way Streams replication configuration you can build: 3 copies of the same table in one database. The goal here is simply to have something concrete to build on in later posts...

So let's get into the details. In the following sections, I'll build and maintain 3 copies of the same table, named T1, located in the FRANCE, USA and JAPAN schemas. In order to do it, I'll go through the steps below:

  • Step 1: Create the 3 schemas
  • Step 2: Prepare the multi-version data dictionary and the tables
  • Step 3: Create the Streams administrator and the Streams queue
  • Step 4: Create the capture process and define the capture rules
  • Step 5: Create the apply components
  • Step 6: Modify the schema of the captured change
  • Step 7: Instantiate the tables
  • Step 8: Start the capture and apply processes
  • Step 9: Test the replication
  • Step 10: Stop and suppress the Oracle Streams configuration

Note:
We've tested this post on an 11.1.0.7 Enterprise Edition Oracle database running on Linux x86 32-bit.

Step 1: Create the 3 schemas

The main limitation of this one-database 3-way replication configuration comes from the fact that a table can be instantiated only once (see DBA_APPLY_INSTANTIATED_OBJECTS), while each table here actually serves as a source for 2 apply processes, one per other copy. As a result, even though with a 3-database system we could create the configuration with limited downtime on the primary table, in this case we need to instantiate the 3 tables with the same SCN. For that reason, we will create the 3 copies of the table at the beginning of this scenario and we will allow users to modify them only after we have instantiated them.
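As a side note, you can check at any point in time which objects have already been instantiated; the sketch below queries DBA_APPLY_INSTANTIATED_OBJECTS and should return no rows until Step 7:
col source_database format a6
col object format a10
col instantiation_scn format 999999999999

select source_database,
source_object_owner||'.'||source_object_name object,
instantiation_scn
from dba_apply_instantiated_objects;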

The script below creates the 3 tables and adds 2 rows to each one of them:
connect / as sysdba

create user france
identified by france
default tablespace users
temporary tablespace temp;

grant connect,resource to france;

create user usa
identified by usa
default tablespace users
temporary tablespace temp;

grant connect,resource to usa;

create user japan
identified by japan
default tablespace users
temporary tablespace temp;

grant connect,resource to japan;

create table france.t1(
id number primary key,
text varchar2(80));

create table usa.t1(
id number primary key,
text varchar2(80));
create table japan.t1(
id number primary key,
text varchar2(80));
insert into france.t1(id, text)
values (1,'Text 1');

insert into france.t1(id, text)
values (2,'Text 2');

insert into usa.t1
(select * from france.t1);

insert into japan.t1
(select * from france.t1);

commit;

Step 2: Prepare the multi-version data dictionary and the tables

I've already talked about the purpose of this step: it allows the capture processes to build the LCRs from the redo logs even if objects are created, altered or dropped over time, and it makes sure there are no pending changes when the capture starts. Before we create the capture, we need to capture a copy of the data dictionary and store it in the redo logs with DBMS_CAPTURE_ADM.BUILD:
var first_scn number;

set serveroutput on

DECLARE
scn NUMBER;
BEGIN
DBMS_CAPTURE_ADM.BUILD(
first_scn => scn);
DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
:first_scn := scn;
END;
/

First SCN Value = 2260357
We then prepare the instantiation of the 3 tables. In this case, we did not set up supplemental logging at the database level; instead, we've added it just for the 3 tables, which all have primary keys. The script below prepares the objects to be instantiated:
exec dbms_capture_adm.prepare_table_instantiation(-
table_name=>'france.t1', supplemental_logging=>'keys');

exec dbms_capture_adm.prepare_table_instantiation(-
table_name=>'usa.t1', supplemental_logging=>'keys');

exec dbms_capture_adm.prepare_table_instantiation(-
table_name=>'japan.t1', supplemental_logging=>'keys');
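If you want to double-check that the supplemental log groups were actually created by the calls above, a quick sketch against DBA_LOG_GROUPS should list the generated log groups for the 3 tables:
col owner format a8
col table_name format a10
col log_group_type format a22

select owner,
table_name,
log_group_type
from dba_log_groups
where table_name='T1';
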
Step 3: Create the Streams administrator and the Streams queue

As for any other Oracle Streams configuration, we need to create a Streams administrator and, in this case, one queue to be used as the link between the capture and apply processes:
connect / as sysdba

CREATE TABLESPACE streams_tbs
DATAFILE '/u01/app/oracle/oradata/BLACK/streams_tbs.dbf' size 25M
AUTOEXTEND ON MAXSIZE 256M;

CREATE USER strmadmin IDENTIFIED BY strmadmin
DEFAULT TABLESPACE streams_tbs
QUOTA UNLIMITED ON streams_tbs;

grant dba to strmadmin;

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue');
END;
/

Step 4: Create the capture process and define the capture rules

By tiniest 3-way replication configuration, I mean I want to create as few components as possible. In this case, we have only one database, so we can create a single capture for the 3 tables. That's what we do below:
  • We create the same capture for the 3 tables
  • We add one inclusion rule per table to replicate the DML only
  • We don't capture the changes that have a TAG because we don't want to replicate a change that is the result of an apply process
The script below creates the capture and adds the associated rules:
connect strmadmin/strmadmin

accept first_scn prompt "Enter the First SCN of the Capture: "
Enter the First SCN of the Capture: 2260357

var first_scn number;
exec :first_scn:=&&first_scn

BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'strmadmin.streams_queue',
capture_name => 'streams_capture',
rule_set_name => NULL,
source_database => 'BLACK',
use_database_link => false,
first_scn => :first_scn,
logfile_assignment => 'implicit');
END;
/

col capture_name format a15
col queue_name format a13
col first_scn format 999999999999
col start_scn format 999999999999
col rule_set_name format a11

select capture_name,
queue_name,
first_scn,
start_scn,
rule_set_name
from dba_capture;

CAPTURE_NAME QUEUE_NAME FIRST_SCN START_SCN RULE_SET_N
--------------- ------------- ------------- ------------- ----------
STREAMS_CAPTURE STREAMS_QUEUE 2260357 2260357

set serveroutput on
DECLARE
type tn is varray(3) of varchar2(100);
ln tn:=tn('france.t1','usa.t1','japan.t1');
BEGIN
for i in ln.first..ln.last loop
dbms_output.put_line('Source is: '||ln(i));
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => ln(i),
streams_type => 'capture',
streams_name => 'streams_capture',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
end loop;
END;
/

Source is: france.t1
Source is: usa.t1
Source is: japan.t1

set lines 120
col streams_name format a16
col streams_type format a9
col table_owner format a10
col table_name format a15
col rule_type format a8
col rule_name format a15

select STREAMS_NAME,
STREAMS_TYPE,
TABLE_OWNER,
TABLE_NAME,
RULE_TYPE,
RULE_NAME
from DBA_STREAMS_TABLE_RULES;

STREAMS_NAME STREAMS_T TABLE_OWNE TABLE_NAME RULE_TYP RULE_NAME
---------------- --------- ---------- --------------- -------- ---------
STREAMS_CAPTURE CAPTURE USA T1 DML T118
STREAMS_CAPTURE CAPTURE FRANCE T1 DML T117
STREAMS_CAPTURE CAPTURE JAPAN T1 DML T119

Step 5: Create the apply components

Unlike the capture, we must create several apply components. That's because each change, even if it is captured only once, can be applied several times. In the script below, we create one apply per target schema and we add rules to that apply so that it applies the changes captured on the other two tables:
connect strmadmin/strmadmin

set serveroutput on

DECLARE
type t_n is varray(3) of varchar2(100);
l_n t_n:=t_n('france','usa','japan');
BEGIN
for i in l_n.first..l_n.last loop
for j in l_n.first..l_n.last loop
if (l_n(i)!=l_n(j)) then
dbms_output.put_line('APPLY '||l_n(i)||'_apply'||
' applies '||l_n(j)||'.t1');
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => l_n(j)||'.t1',
streams_type => 'apply',
streams_name => l_n(i)||'_apply',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
end if;
end loop;
end loop;
END;
/

APPLY france_apply applies usa.t1
APPLY france_apply applies japan.t1
APPLY usa_apply applies france.t1
APPLY usa_apply applies japan.t1
APPLY japan_apply applies france.t1
APPLY japan_apply applies usa.t1


col apply_name format a13
col queue_name format a13
col rule_set_name format a11
col tag format a4

select apply_name,
queue_name,
rule_set_name,
status,
message_delivery_mode,
apply_tag tag
from dba_apply;

APPLY_NAME QUEUE_NAME RULE_SET_NA STATUS MESSAGE_DE TAG
------------- ------------- ----------- -------- ---------- ----
JAPAN_APPLY STREAMS_QUEUE RULESET$_27 ENABLED CAPTURED 00
USA_APPLY STREAMS_QUEUE RULESET$_24 ENABLED CAPTURED 00
FRANCE_APPLY STREAMS_QUEUE RULESET$_21 ENABLED CAPTURED 00

Step 6: Modify the schema of the captured change

While the database is the same for the 3 tables, the schema names differ. As a result, a change to the USA.T1 table must be applied to FRANCE.T1 and JAPAN.T1. The script below adds declarative transformation rules to change the schema name of each LCR; the transformation is done on the apply side:
connect strmadmin/strmadmin

select rule_owner
, rule_name
, streams_type
, streams_name
, table_name
, table_owner
from dba_streams_table_rules
where streams_type='APPLY';

set serveroutput on

DECLARE
type t_n is varray(3) of varchar2(100);
l_n t_n:=t_n('france','usa','japan');
v_rulename varchar2(30);
BEGIN
for i in l_n.first..l_n.last loop
for j in l_n.first..l_n.last loop
if (l_n(i)!=l_n(j)) then
dbms_output.put_line('APPLY: '||
l_n(i)||'_apply for CAPTURE OF '||
l_n(j)||'.t1');
select rule_name
into v_rulename
from dba_streams_table_rules
where streams_type='APPLY'
and streams_name=upper(l_n(i))||'_APPLY'
and table_name='T1'
and table_owner=upper(l_n(j));
dbms_output.put_line(rpad(' ',10)||'=>'||v_rulename);
dbms_streams_adm.rename_schema(
rule_name => v_rulename,
from_schema_name => l_n(j),
to_schema_name => l_n(i),
step_number => 0,
operation => 'add');
end if;
end loop;
end loop;
END;
/
col rule_name format A6
col from_schema_name format a6
col to_schema_name format a12

select rule_name,
transform_type,
from_schema_name,
to_schema_name,
declarative_type
from dba_streams_transformations;

RULE_N TRANSFORM_TYPE FROM_S TO_SCHEMA_NA DECLARATIVE_T
------ -------------------------- ------ ------------ -------------
T120 DECLARATIVE TRANSFORMATION USA FRANCE RENAME SCHEMA
T123 DECLARATIVE TRANSFORMATION FRANCE USA RENAME SCHEMA
T122 DECLARATIVE TRANSFORMATION JAPAN FRANCE RENAME SCHEMA
T125 DECLARATIVE TRANSFORMATION JAPAN USA RENAME SCHEMA
T126 DECLARATIVE TRANSFORMATION FRANCE JAPAN RENAME SCHEMA
T128 DECLARATIVE TRANSFORMATION USA JAPAN RENAME SCHEMA
Step 7: Instantiate the tables

Actually, because of what I said in the first step of this post, the tables are already instantiated. However, we need to provide the apply components with an SCN that is higher than the SCN of the prepare instantiation, so that each apply knows which changes to apply; we'll just get the current SCN of the database:
connect / as sysdba

col apply_scn format 999999999999

select dbms_flashback.get_system_change_number apply_scn
from dual;

APPLY_SCN
---------
2269379

accept instantiation_scn prompt "Enter the Instantiation SCN: "
Enter the Instantiation SCN: 2269379

And use it with the dbms_apply_adm.set_table_instantiation_scn procedure for the 3 "source" tables:
DECLARE
type t_n is varray(3) of varchar2(100);
l_n t_n:=t_n('france','usa','japan');
BEGIN
for i in l_n.first..l_n.last loop
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => l_n(i)||'.t1',
source_database_name => 'BLACK',
instantiation_scn => &&instantiation_scn);
end loop;
end;
/

col SOURCE_DATABASE format a6
col OBJECT format a10
col INSTANTIATION_SCN format 999999999999

select source_database,
source_object_owner||'.'||source_object_name object,
instantiation_scn
from dba_apply_instantiated_objects;

SOURCE OBJECT INSTANTIATION_SCN
------ ---------- -----------------
BLACK FRANCE.T1 2269379
BLACK USA.T1 2269379
BLACK JAPAN.T1 2269379

Step 8: Start the capture and apply processes

We are all set; we can let users access the tables and we can start the different processes:
exec dbms_capture_adm.start_capture('streams_capture');
exec dbms_apply_adm.start_apply('usa_apply');
exec dbms_apply_adm.start_apply('france_apply');
exec dbms_apply_adm.start_apply('japan_apply');

Step 9: Test the replication

That's probably the most enjoyable moment... you can make sure that the replication is working as it should:
insert into france.t1(id, text)
values (3,'Text 3');

commit;

pause

col id format 99
col text format a6

select id,
text
from usa.t1;

ID TEXT
--- ------
1 Text 1
2 Text 2
3 Text 3

select id,
text
from japan.t1;

ID TEXT
--- ------
1 Text 1
2 Text 2
3 Text 3

insert into usa.t1(id, text)
values (4,'Text 4');
insert into japan.t1(id, text)
values (5,'Text 5');
commit;

select id,
text
from france.t1
order by 1;

ID TEXT
-- ------
1 Text 1
2 Text 2
3 Text 3
4 Text 4
5 Text 5
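Since the capture rules were created with include_tagged_lcr => false, you can also check that a change made with a non-NULL session tag is not replicated. The sketch below (the ID 99 row is just an illustration) should leave USA.T1 and JAPAN.T1 untouched:
exec dbms_streams.set_tag(hextoraw('FF'))

insert into france.t1(id, text)
values (99,'Tagged, not replicated');

commit;

exec dbms_streams.set_tag(null)

select id,
text
from usa.t1
where id=99;
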
And, like me, you probably can't resist seeing what happens when what should be avoided happens in an N-way replication like this one; we'll create a uniqueness conflict:
insert into usa.t1(id, text)
values (6,'Text 6 Usa');
insert into japan.t1(id, text)
values (6,'Text 6 Japan');
commit;

select id,
text
from france.t1
order by 1;

ID TEXT
-- ------
1 Text 1
2 Text 2
3 Text 3
4 Text 4
5 Text 5

col id format 99
col text format a12

select id,
text
from usa.t1
order by 1;
ID TEXT
--- ----------
1 Text 1
2 Text 2
3 Text 3
4 Text 4
5 Text 5
6 Text 6 Usa

select id,
text
from japan.t1
order by 1;

ID TEXT
--- ------------
1 Text 1
2 Text 2
3 Text 3
4 Text 4
5 Text 5
6 Text 6 Japan

col error_message format a60
select APPLY_NAME, ERROR_MESSAGE
from dba_apply;

APPLY_NAME ERROR_MESSAGE
------------- --------------------------------------------------
JAPAN_APPLY ORA-26714: User error encountered while applying
USA_APPLY ORA-26714: User error encountered while applying
FRANCE_APPLY ORA-26714: User error encountered while applying

select APPLY_NAME, ERROR_MESSAGE
from dba_apply_error;

APPLY_NAME ERROR_MESSAGE
------------- ------------------------------------------------------------
FRANCE_APPLY ORA-00001: unique constraint (FRANCE.SYS_C004474) violated
USA_APPLY ORA-00001: unique constraint (USA.SYS_C004475) violated
JAPAN_APPLY ORA-00001: unique constraint (JAPAN.SYS_C004476) violated
Note:
With 3 separate databases, the result would be different on FRANCE.T1. Even if the changes had been made from the same 2PC transaction, they would have been considered different.
Step 10: Stop and suppress the Oracle Streams configuration

This last step cleans up the configuration; this way, you can restart it as many times as you want:
connect / as sysdba

exec dbms_capture_adm.stop_capture('streams_capture');
exec dbms_apply_adm.stop_apply('usa_apply');
exec dbms_apply_adm.stop_apply('france_apply');
exec dbms_apply_adm.stop_apply('japan_apply');

exec dbms_apply_adm.delete_all_errors('usa_apply')
exec dbms_apply_adm.delete_all_errors('france_apply')
exec dbms_apply_adm.delete_all_errors('japan_apply')

exec dbms_capture_adm.drop_capture('streams_capture');
exec dbms_apply_adm.drop_apply('usa_apply');
exec dbms_apply_adm.drop_apply('france_apply');
exec dbms_apply_adm.drop_apply('japan_apply');
DECLARE
type t_n is varray(3) of varchar2(100);
l_n t_n:=t_n('france','usa','japan');
BEGIN
for i in l_n.first..l_n.last loop
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => l_n(i)||'.t1',
source_database_name => 'BLACK',
instantiation_scn => null);
end loop;
end;
/
exec dbms_streams_adm.remove_queue('strmadmin.streams_queue',true,true);

drop user strmadmin cascade;

drop tablespace streams_tbs
including contents and datafiles;

drop user france cascade;
drop user usa cascade;
drop user japan cascade;


Tuesday, February 17, 2009

How Does the Streams Capture Follow Up The Applied Changes?

Didn't you ever wonder how an asynchronous Streams capture gets its feedback from the apply processes? For weeks it did not make any sense to me! It was obvious the capture got some kind of feedback from the apply, though. The fact that the only mandatory database link goes from the capture side to the apply side turned that question into an even bigger mystery for me... This post shows there is actually no magic or hidden feature involved. It's pretty simple.

I'll let you draw the full map of what happens when you have messages propagated to several databases and involving multiple applies. I'll focus on a very simple case that takes no more than a few minutes to build and is available in one of my previous posts, entitled "Oracle Streams One Way Table Replication 101". This post is also a good pretext to introduce some of the Streams processes, as well as several ways to trace the associated operations...
Note:
What follows has been tested with an 11.1.0.7 database on Linux x86 32-bit; it is very likely that some of the described behaviors depend on the Oracle release.
This post is made of the following sections:

  • Why do I say the capture knows about the applied changes?
  • Where is the applied SCN stored?
  • What process changes these data?
  • How does the capture process get data from the apply?
  • Even more interesting findings/questions...

Why do I say the capture knows about the applied changes?

First, it has to know! That's because, in the case of an asynchronous capture, the messages are enqueued in the buffered part of the queue, which is not persistent. For the capture process to free the archived logs and provide a reliable mechanism in case of a failure, it has to monitor the changes that have been applied on the destination. This way, if one instance crashes and logical change records in buffered queues are lost, the capture can restart from the point at which it knows the messages have been applied for sure.
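A quick way to see the SCNs the capture relies on to decide which archived logs it still needs is a sketch like the one below against DBA_CAPTURE (REQUIRED_CHECKPOINT_SCN and APPLIED_SCN are standard columns of that view in 11.1):
col capture_name format a15

select capture_name,
first_scn,
required_checkpoint_scn,
applied_scn
from dba_capture;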

Second, you can find some data regarding the apply on the capture side of your Streams configuration. The APPLIED_SCN column of the DBA_CAPTURE view is updated on a regular basis as changes are applied. This simple test, once you have the Streams configuration running, shows that behavior:
select applied_scn
from dba_capture;

APPLIED_SCN
-----------
1010398

insert into source.t1
values (4,'Text 4');

col id format 99
col text format a6

select *
from destination.t1;

ID TEXT
-- ------
3 Text 3
4 Text 4
1 Text 1
2 Text 2

select applied_scn
from dba_capture;

APPLIED_SCN
-----------
1010398
So what we discover from that first test is that the last applied SCN is not propagated right away to the capture side of the configuration; actually, if there is no change, it can take a while. However, if you wait and run the query again, you'll see the change is known on the capture side:
select applied_scn
from dba_capture;

APPLIED_SCN
-----------
1026804

Where is the applied SCN stored?

The next legitimate question is to figure out which underlying structure stores the applied SCN. To answer that question, we'll just dig into the Streams dictionary:
col text format a80
set long 10000
set longchunksize 10000

select text
from dba_views
where owner='SYS'
and view_name='DBA_CAPTURE';

TEXT
----------------------------------------------------------------------------------------------------
select cp.capture_name, cp.queue_name, cp.queue_owner, cp.ruleset_name,
cp.ruleset_owner, u.name, cp.start_scn,
decode(cp.status, 1, 'DISABLED',
2, 'ENABLED',
4, 'ABORTED', 'UNKNOWN'),
cp.spare1, cp.spare2,
decode(cp.use_dblink, 1, 'YES', 'NO'),
cp.first_scn, cp.source_dbname, dl.source_dbid, dl.source_resetlogs_scn,
dl.source_resetlogs_time, cp.logmnr_sid, cp.negative_ruleset_name,
cp.negative_ruleset_owner,
-- MAX_CHECKPOINT_SCN, -1 for ckpt free
decode(bitand(cp.flags, 8192), 8192, -1, nvl(dl.checkpoint_scn, 0)),
-- use applied_scn(spare2) for ckpt free
decode(bitand(cp.flags, 8192), 8192, nvl(cp.spare2,0),
dbms_logrep_util.get_req_ckpt_scn(dl.id, nvl(cp.spare2,0))),
decode(bitand(cp.flags, 4), 4, 'IMPLICIT', 'EXPLICIT'),
cp.status_change_time, cp.error_number,
cp.error_message, cp.version,
decode(bitand(cp.flags, 64), 64, 'DOWNSTREAM', 'LOCAL'),
dbms_logrep_util.get_last_enq_scn(cp.capture_name), cp.spare3
from "_DBA_CAPTURE" cp,
dba_logmnr_session dl,
sys.user$ u
where dl.id (+) = cp.logmnr_sid
and cp.capture_userid = u.user# (+)
and (bitand(cp.flags,512) != 512) -- skip sync capture
From that view definition you can see the data come from _DBA_CAPTURE.SPARE2, which itself comes from streams$_capture_process.SPARE2:
col text format a80
set long 10000
set longchunksize 10000

select text
from dba_views
where owner='SYS'
and view_name='_DBA_CAPTURE';

TEXT
----------------------------------------------------------------------------------------------------
select
queue_oid, queue_owner, queue_name, capture#, capture_name,
status, ruleset_owner, ruleset_name, logmnr_sid, predumpscn,
dumpseqbeg, dumpseqend, postdumpscn, flags, start_scn, capture_userid,
spare1, spare2, spare3, use_dblink, first_scn, source_dbname,
spare4, spare5, spare6, spare7, negative_ruleset_owner,
negative_ruleset_name, start_date, end_date, status_change_time,
error_number, error_message, version
from sys.streams$_capture_process

What process changes these data?

We'll use the fact that we've set up supplemental logging for the whole database to track the process that changes streams$_capture_process. In order to proceed, we'll perform the same test we did in the first part of this post, but we'll archive the redo log before and after the test and we'll use LogMiner to get the information.
delete from source.t1;

commit;

alter system archive log current;

select thread#, max(sequence#)
from v$archived_log
group by thread#;

THREAD# MAX(SEQUENCE#)
------- --------------
1 17

select applied_scn
from dba_capture;

APPLIED_SCN
-----------
1081340

begin
for i in 1..100000 loop
insert into source.t1
values (i,'Text '||to_char(i));
commit;
end loop;
end;
/

select count(*)
from destination.t1;

COUNT(*)
--------
47895

/
COUNT(*)
--------
81010

/
COUNT(*)
--------
100000

select applied_scn
from dba_capture;

APPLIED_SCN
-----------
1081340
Once the change is made, we'll wait for a few minutes so that the applied SCN is updated on the capture side. Once we are sure that's the case, we'll archive the redo log again and get the names of the files that contain the transactions:
select applied_scn
from dba_capture;

APPLIED_SCN
-----------
1597521

alter system archive log current;

select thread#, max(sequence#)
from v$archived_log
group by thread#;

THREAD# MAX(SEQUENCE#)
------- --------------
1 25

select name from v$archived_log
where thread#=1
and sequence# between 18 and 25;

NAME
------------------------------------------------------------
/u01/app/oracle/oradata/BLACK/archivelogs/1_18_679061156.dbf
/u01/app/oracle/oradata/BLACK/archivelogs/1_19_679061156.dbf
/u01/app/oracle/oradata/BLACK/archivelogs/1_20_679061156.dbf
/u01/app/oracle/oradata/BLACK/archivelogs/1_21_679061156.dbf
/u01/app/oracle/oradata/BLACK/archivelogs/1_22_679061156.dbf
/u01/app/oracle/oradata/BLACK/archivelogs/1_23_679061156.dbf
/u01/app/oracle/oradata/BLACK/archivelogs/1_24_679061156.dbf
/u01/app/oracle/oradata/BLACK/archivelogs/1_25_679061156.dbf

Once we know the data have been changed, we can query the archived logs for the changes made to streams$_capture_process:
begin
dbms_logmnr.add_logfile(
logfilename => '/u01/app/oracle/oradata/BLACK/archivelogs/1_18_679061156.dbf',
options => dbms_logmnr.new);
for i in 19..25 loop
dbms_logmnr.add_logfile(
logfilename => '/u01/app/oracle/oradata/BLACK/archivelogs/1_'||to_char(i)||'_679061156.dbf',
options => dbms_logmnr.addfile);
end loop;
end;
/

exec dbms_logmnr.start_logmnr( options => -
dbms_logmnr.dict_from_online_catalog);

select operation, session#, serial#, count(*)
from v$logmnr_contents
where seg_owner='SYS' and seg_name='STREAMS$_CAPTURE_PROCESS'
group by operation, session#, serial#;

OPERATION SESSION# SERIAL# COUNT(*)
--------- -------- ------- --------
UPDATE 131 1 2
As you can see below, the process that makes the change is the capture process; to get more details, we'll trace that process with dbms_monitor:
select sid, serial#   
from v$streams_capture;

SID SERIAL#
--- -------
131 1

exec dbms_monitor.session_trace_enable(131,1,true,true,'all_executions')

select tracefile
from v$session s, v$process p
where s.sid=131
and p.addr=s.paddr;

TRACEFILE
----------------------------------------------------------------
/u01/app/oracle/diag/rdbms/black/BLACK/trace/BLACK_cp01_8702.trc

How does the capture process get data from the apply?

Once we've set up the traces, we can just wait for the update on streams$_capture_process to be executed and check the content of the trace file. You'll discover from those traces that the capture process queries streams$_apply_milestone, which resides on the apply side of the configuration. If we restart a LogMiner session, on the apply side this time, and look for the sessions that update the streams$_apply_milestone table, we'll find that it's the apply processes doing it. That table is actually updated by the apply reader and apply server processes:
exec dbms_monitor.session_trace_disable(131,1)

[...]

select operation, session#, serial#, count(*)
from v$logmnr_contents
where seg_owner='SYS' and seg_name='STREAMS$_APPLY_MILESTONE'
group by operation, session#, serial#;

OPERATION SESSION# SERIAL# COUNT(*)
--------- -------- ------- --------
UPDATE 128 1 4
UPDATE 126 1 109

select sid, serial# from v$streams_apply_server;

SID SERIAL#
---------- -------
126 1

select sid, serial# from v$streams_apply_reader;

SID SERIAL#
---------- -------
128 1

Even more interesting findings/questions...

Now that we've discovered how the capture keeps track of the applied changes by querying over the database link, we can use the same strategy to investigate other Streams internals in more advanced configurations. That leads to very interesting findings too; I'll show you one right away! If you check for the process applying the changes, you'll find the apply server process; restart a LogMiner session on the apply side:
[...]

select operation, session#, serial#, count(*)
from v$logmnr_contents
where seg_owner='DESTINATION' and seg_name='T1'
group by operation, session#, serial#;

OPERATION SESSION# SERIAL# COUNT(*)
--------- -------- ------- --------
INSERT 126 1 100000
And if you trace it, you'll be able to catch the change made to streams$_apply_milestone, but nothing regarding the changes applied to the destination table:
exec dbms_monitor.session_trace_enable(126,1,true,true,'all_executions')

delete from source.t1;

commit;

select * from destination.t1;

no rows selected

select tracefile
from v$session s, v$process p
where s.sid=126
and p.addr=s.paddr;

TRACEFILE
----------------------------------------------------------------
/u01/app/oracle/diag/rdbms/black/BLACK/trace/BLACK_as02_8708.trc

Actually, if you want to track the changes made to the table by that process, you'll have to set the trace_level apply parameter as below:
exec dbms_apply_adm.set_parameter('streams_apply', 'trace_level',255);

begin
for i in 1..100000 loop
insert into source.t1
values (i,'Text '||to_char(i));
commit;
end loop;
end;
/
If you edit the trace file, you'll find the change records (LCRs) that are applied to the destination table, but nothing like the SQL statements... neither in V$SQL nor in V$ACTIVE_SESSION_HISTORY. I guess this post raises more questions than it answers!
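If you want to check that last point yourself, a sketch like the one below (assuming the apply server is still SID 126, serial# 1, and that you are licensed to query the ASH views) lets you look for any SQL_ID sampled for the apply server session:
select sample_time,
sql_id,
event
from v$active_session_history
where session_id=126
and session_serial#=1
order by sample_time;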


Saturday, February 7, 2009

What About Changes Uncommitted when A Streams Capture Is Instantiated?

That may sound stupid, but imagine the following scenario: "You have uncommitted changes that have been pending for a while before you start a Streams capture on one of the tables involved in those changes. We could even suppose the redo log containing the changes has been archived and the archived log deleted! How does the Streams capture handle that case?"

The answer is pretty simple though: "That is not possible!". That's one of the purposes of the dbms_capture_adm.prepare_xxxx_instantiation procedures. They lock the tables to be captured with a Shared (S) TM enqueue, which ensures there are no uncommitted changes in the table. To be more precise, they actually wait for those changes to be committed.

Note that a Shared TM enqueue doesn't prevent a transaction involving the table from being pending at the time the capture begins. But it ensures no DML has been applied to the table. The reason is that Shared (S or 4) enqueues and Row-X (RX or 3) enqueues are not compatible, as you can see in the Summary of Table Locks in the Concepts Guide.
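As a side note, once prepare_table_instantiation has returned, you can check the SCN it recorded for the table; the sketch below queries DBA_CAPTURE_PREPARED_TABLES:
col table_owner format a12
col table_name format a10

select table_owner,
table_name,
scn,
timestamp
from dba_capture_prepared_tables;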

In the following examples, I'll show some of the implications.

Prepare a table instantiation when it has pending changes

Obviously, the first example consists of checking what happens when you want to prepare the instantiation of a table that has pending changes. To do it, open 2 sessions and run the following test case:

  • In the first session, create a table, insert data into it and leave that session open without committing the transaction:
create table T1 (col1 number);

insert into T1 values (1);
  • In the second session, run dbms_capture_adm.prepare_table_instantiation
exec dbms_capture_adm.prepare_table_instantiation('T1','none');
  • The second session waits. You can check the enqueues from the GV$LOCK fixed view as below. You'll see the session preparing the instantiation is waiting for the TM enqueue held by the first session to be released:
select sid,
type,
lmode,
request,
ctime,
block
from gv$lock
where id1 in (
select object_id
from user_objects
where object_name='T1');

SID TY LMODE REQUEST CTIME BLOCK
--- -- ----- ------- ----- -----
125 TM 0 4 469 0
170 TM 3 0 480 1
  • You can roll back (or commit) the changes in the first session, and dbms_capture_adm.prepare_table_instantiation will finish:
rollback;
  • We can now drop the table and go to the next section
drop table t1 purge;

Prepare a table instantiation when it has a pending transaction but no change

In this second example, I'll show that a pending change is different from a pending transaction involving the table; open 2 sessions and run the following test case:

  • In the first session, create a table, lock the table in share mode and check there is a TM enqueue for the table; leave that session open without committing the transaction:
create table T1 (col1 number);

lock table T1 in share mode;

select sid,
type,
lmode,
request,
ctime,
block
from gv$lock
where id1 in (
select object_id
from user_objects
where object_name='T1');

SID TY LMODE REQUEST CTIME BLOCK
--- -- ----- ------- ----- -----
170 TM 4 0 25 0
  • In the second session, run dbms_capture_adm.prepare_table_instantiation
exec dbms_capture_adm.prepare_table_instantiation('T1','none');
  • You can see that, in this case, the dbms_capture_adm.prepare_table_instantiation call in the second session finishes right away. We can now drop the table and go to the next section.
drop table t1 purge;

Row-X TM Enqueue doesn't imply pending changes...

This last example shows that a Row-X TM enqueue doesn't actually imply any row change. As in the first example, we'll open 2 sessions and run a small scenario to illustrate the point and its impact on preparing the instantiation of the table.

  • In the first session, create a table with data. Then run an update that changes NO rows and leave that session open without committing the transaction; you can check there is a Row-X TM enqueue on the table:
create table T1 (col1 number);

insert into T1 values (1);

commit;

update T1 set col1=1 where 0=1;

select sid,
type,
lmode,
request,
ctime,
block
from gv$lock
where id1 in (
select object_id
from user_objects
where object_name='T1')
and sid=sys_context('USERENV','SID');

SID TY LMODE REQUEST CTIME BLOCK
--- -- ----- ------- ----- -----
170 TM 3 0 80 0
  • In the second session, run dbms_capture_adm.prepare_table_instantiation
exec dbms_capture_adm.prepare_table_instantiation('T1','none');
  • As expected, the second session waits, even though no change has actually been made because of the WHERE 0=1 clause of the update. You can roll back (or commit) the transaction in the first session, and dbms_capture_adm.prepare_table_instantiation will finish:
rollback;
  • Drop the table to finish that section
drop table t1 purge;

Conclusion

Think twice before you run dbms_capture_adm.prepare_xxxx_instantiation!



Tuesday, February 3, 2009

Setting Up a Downstream Real-Time Capture with RMAN 101

Oracle Streams downstream capture is probably one of the most commonly used Streams configurations, whether it's for long-term replication or for a short period of time, like during a database upgrade. Its limited impact on the source database advocates for that configuration: it only requires adding supplemental logging to, and capturing the dictionary from, the primary database.

On the other hand, Oracle Recovery Manager, or RMAN, is one of the most common ways to clone a database, and it can easily manage several hundred gigabytes, or more. It's probably one of the number one tools for most Oracle database administrators.

This post shows a simple setup of a Streams real-time downstream capture instantiated with the help of RMAN; as in the first post of this blog, "Oracle Streams One Way Table Replication 101", we will focus on keeping everything simple and we'll use many shortcuts to get to the result. However, you should be able to fill the gap with your own needs pretty easily, and you'll get all the details in the documentation. Regarding this specific configuration, you can start with Oracle® Streams Concepts and Administration 11g Release 1 (11.1), Appendix B "Online Database Upgrade with Oracle Streams".

Let's start... We'll build the configuration shown in the graphic below:


Note
The above configuration requires Oracle Database Enterprise Edition
We'll assume we already have a database and its instance; they are both named BLACK in what follows. We'll get into all the steps related to the demo setup, from the creation of the source schema to the testing of the working configuration; here are the related sections:

  • 2 Words About the Source Database and its schema
  • Configure Database Logging and Prepare the Multi-Version Data Dictionary
  • Duplicate your database with RMAN
  • Create the Streams Administrator and the Streams Queue
  • Setup the Log Transport Service for Downstream Capture
  • Create the Downstream Capture
  • Specify the Instantiation SCN and Create the Apply
  • Start And Monitor Oracle Streams
  • Test the Replication
  • Other Considerations

2 Words About the Source Database and its schema

The source database and its instance are both named BLACK. The script below creates a DEMO schema with 2 tables T1 and T2 that have primary keys and which are fully supported by Streams:

connect / as sysdba

create user demo
identified by demo
default tablespace users
temporary tablespace temp;

grant connect, resource to demo;

connect demo/demo

create table t1 (
id number primary key,
lib varchar2(10));

create table t2 (
id number primary key,
lib varchar2(10));

insert into demo.t1
values (1,'data 1');

insert into demo.t1
values (2,'data 2');

insert into demo.t2
values (1,'data 1');

commit;
Note
To ease the work, we assume there is no unsupported type in the application schema (see DBA_STREAMS_UNSUPPORTED) and that all the tables have a primary or a unique key.
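If you want to verify that assumption on your own schema, the sketch below queries DBA_STREAMS_UNSUPPORTED; it should return no rows for DEMO:
col table_name format a15
col reason format a40

select table_name,
reason
from dba_streams_unsupported
where owner='DEMO';
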
Configure Database Logging and Prepare the Multi-Version Data Dictionary

For more details about the requirements for a Streams source database, you can refer to the related section of the first post of this blog. To make it short, you need the database to be in archivelog mode and to configure supplemental logging:
connect / as sysdba

alter database add supplemental log
data (primary key, unique index) columns;

select LOG_MODE,
SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI
from v$database;

LOG_MODE SUPPLEME SUP SUP
------------ -------- --- ---
ARCHIVELOG IMPLICIT YES YES
You also need to push a copy of the data dictionary from the source to the destination database. To do it, we need to run DBMS_CAPTURE_ADM.BUILD as below:
var first_scn number;

set serveroutput on

DECLARE
scn NUMBER;
BEGIN
DBMS_CAPTURE_ADM.BUILD(
first_scn => scn);
DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
:first_scn := scn;
END;
/

First SCN Value = 319193
It is important to keep track of the SCN returned by DBMS_CAPTURE_ADM.BUILD. You'll use that SCN as the first SCN of the capture process so that the metadata from the source database can be pushed to and kept in the destination MVDD. Note that the command above captures only the minimum to add to the MVDD: you have to add some additional definitions for the objects you want to replicate. You should use one of the prepare_xxxx_instantiation procedures of the dbms_capture_adm package to add the related definitions to the redo logs:
exec dbms_capture_adm.prepare_table_instantiation(-
table_name=>'demo.t1');

exec dbms_capture_adm.prepare_table_instantiation(-
table_name=>'demo.t2');

Duplicate your database with RMAN

You can use the method of your choice to instantiate the data on the destination database. What matters is that (1) you know the SCN of the table copies and (2) that SCN, called the instantiation SCN, is after the first SCN returned by DBMS_CAPTURE_ADM.BUILD. In what follows, we'll use Recovery Manager to get there; we'll fully leverage the new 11g DUPLICATE FROM ACTIVE DATABASE command.
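As a quick sanity check (just a sketch), you can compare the current SCN of BLACK with the first SCN returned by DBMS_CAPTURE_ADM.BUILD (319193 above); any SCN you capture from now on will satisfy condition (2):
select dbms_flashback.get_system_change_number current_scn
from dual;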
Note
In this example, we'll duplicate BLACK to build a new database named WHITE; because the 2 databases will run on the same server, we'll rename the /.../BLACK directories to /.../WHITE.
To duplicate the database, we can run the following operations:
  • Configure the service for the new instance; on Linux, the command looks like below:
echo "WHITE:$ORACLE_HOME:N" >>/etc/oratab
  • Setup the environment for the new instance
. oraenv
WHITE
  • Create a password file with the same password as for the BLACK instance:
cd $ORACLE_HOME
orapwd file=orapwWHITE password=change_on_install ignorecase=Y
  • Statically register the WHITE instance in its local listener, via the listener.ora file. It allows connecting to WHITE as a SYSDBA user from a remote location even when the instance is down:
cd $ORACLE_HOME/network/admin

$ grep -A 6 SID_LIST_LISTENER listener.ora
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
(SID_NAME = WHITE)
)
)

lsnrctl reload listener
  • Create an alias for the 2 instances in the tnsnames.ora file:
cd $ORACLE_HOME/network/admin

$ cat tnsnames.ora

BLACK =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))
)
(CONNECT_DATA= (SID=BLACK)
)
)

WHITE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))
)
(CONNECT_DATA= (SID=WHITE)
)
)
  • Create an init.ora file to start the new instance. Note that you can do it with only 4 parameters (even 3 if you use the 11g memory_target parameter); the spfile will be rebuilt from the BLACK one during the DUPLICATE operation anyway:
cd $ORACLE_HOME/dbs

echo "db_name=WHITE" >initWHITE.ora
echo "pga_aggregate_target=160M" >>initWHITE.ora
echo "processes=100" >>initWHITE.ora
echo "sga_target=250M" >>initWHITE.ora
  • If the source database uses directories that are not managed automatically by OMF or ADR, you'll have to create the corresponding directories for the new database:
# Corresponding destination for non-OMF datafiles
mkdir -p /u01/app/oracle/oradata/WHITE

# Corresponding destination for archivelogs
mkdir -p /u01/app/oracle/oradata/WHITE/archivelogs

# Audit File Destination
mkdir -p $ORACLE_BASE/admin/WHITE/adump
  • Connect to both source and destination database with RMAN:
$ORACLE_HOME/bin/rman
connect target sys@black
database Password:
connected to target database: BLACK (DBID=361377223)

connect auxiliary sys@white
auxiliary database Password:
connected to auxiliary database (not started)
  • Start the WHITE instance in nomount mode:
startup auxiliary nomount;
  • Run the DUPLICATE command:
DUPLICATE TARGET DATABASE
TO WHITE
FROM ACTIVE DATABASE
DB_FILE_NAME_CONVERT 'BLACK','WHITE'
SPFILE
PARAMETER_VALUE_CONVERT 'BLACK','WHITE'
SET PGA_AGGREGATE_TARGET = '150M'
SET SGA_TARGET = '250M'
SET LOG_FILE_NAME_CONVERT 'BLACK','WHITE';
  • exit RMAN, the DUPLICATE operation is done!
exit;
Create the Streams Administrator and the Streams Queue

We'll create the Streams Administrator and the Streams Queue on the WHITE database; for a more detailed explanation, refer to the corresponding section of my previous post:
. oraenv
WHITE

sqlplus / as sysdba

CREATE TABLESPACE streams_tbs
DATAFILE '/u01/app/oracle/oradata/WHITE/streams_tbs01.dbf'
SIZE 25M AUTOEXTEND ON MAXSIZE 256M;

CREATE USER strmadmin IDENTIFIED BY strmadmin
DEFAULT TABLESPACE streams_tbs
QUOTA UNLIMITED ON streams_tbs;

grant dba to strmadmin;

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue');
END;
/

exit;
Note
In the case of a downstream capture, there is no need to create a Streams administrator on the source database. There is no need either to create a database link from or to the source database, and the global_names parameter doesn't have to be set to true.

Setup the Log Transport Service for Downstream Capture

To allow the redo logs to be shipped from BLACK to WHITE:
  • Create standby redo logs on WHITE, of the same size as the redo logs of the BLACK database.
  • Change the log_archive_dest_n parameters of the BLACK instance so that the logs are shipped in asynchronous mode to WHITE.
To start, we'll check the size of the redo log files on BLACK:
. oraenv
BLACK

sqlplus / as sysdba

select thread#, sequence#, bytes
from v$log
order by 1,2;

THREAD# SEQUENCE# BYTES
------- --------- ----------
1 17 52428800
1 18 52428800
1 19 52428800

exit;
Then we'll create standby redo logs of the same size on WHITE:
. oraenv
WHITE

sqlplus / as sysdba

alter database add standby logfile thread 1 group 21
('/u01/app/oracle/oradata/WHITE/redo21_1.log') SIZE 52428800;
alter database add standby logfile thread 1 group 22
('/u01/app/oracle/oradata/WHITE/redo22_1.log') SIZE 52428800;
alter database add standby logfile thread 1 group 23
('/u01/app/oracle/oradata/WHITE/redo23_1.log') SIZE 52428800;

exit;
Once the standby redo logs are created, we can set up the log transport service on the BLACK instance:
. oraenv
BLACK

sqlplus / as sysdba

alter system set log_archive_dest_2=
'service=WHITE lgwr async noregister valid_for=(online_logfiles,all_roles) db_unique_name=WHITE';

alter system set log_archive_config=
'dg_config=(BLACK,WHITE)';

alter system set log_archive_dest_state_2=enable;

exit

. oraenv
WHITE

sqlplus / as sysdba

alter system set log_archive_config=
'dg_config=(BLACK,WHITE)';

exit
Note:
In the case of real-time capture, the redo stream is shipped by the LNS process to the RFS processes. The redo data are then stored in the standby redo log files. The ARCH processes of the destination instance archive the standby redo logs; as a consequence, you should not use the TEMPLATE clause.
Create the Downstream Capture

The downstream capture creation is exactly like a capture creation on the source database. The only difference is the source database name, which, in this case, is different from the name of the database actually running the capture:
connect strmadmin/strmadmin

BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'strmadmin.streams_queue',
capture_name => 'streams_capture',
rule_set_name => NULL,
source_database => 'BLACK',
use_database_link => false,
first_scn => 319193,
logfile_assignment => 'implicit');
END;
/
Once the capture is created, we'll set its downstream_real_time_mine parameter to Y:
BEGIN
DBMS_CAPTURE_ADM.SET_PARAMETER(
capture_name => 'streams_capture',
parameter => 'downstream_real_time_mine',
value => 'Y');
END;
/
We can finally add the capture rules:
connect strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'demo.t1',
streams_type => 'capture',
streams_name => 'streams_capture',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
END;
/

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'demo.t2',
streams_type => 'capture',
streams_name => 'streams_capture',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
END;
/
Specify the Instantiation SCN and Create the Apply

We need to specify the instantiation SCN of the tables so that the apply knows from which transactions it should start to apply changes. In the case of the RMAN DUPLICATE command, that's the SCN right before the OPEN RESETLOGS:
select INCARNATION#,RESETLOGS_TIME,RESETLOGS_CHANGE#
from v$database_incarnation
order by 1;

INCARNATION# RESETLOGS RESETLOGS_CHANGE#
------------ --------- -----------------
1 26-JAN-09 1
2 03-FEB-09 329645
begin
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => 'demo.t1',
source_database_name => 'BLACK',
instantiation_scn => 329644);
end;
/

begin
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => 'demo.t2',
source_database_name => 'BLACK',
instantiation_scn => 329644);
end;
/

col SOURCE_DATABASE format a6
col OBJECT format a10
col INSTANTIATION_SCN format 999999999999

select source_database,
source_object_owner||'.'||source_object_name object,
instantiation_scn
from dba_apply_instantiated_objects;

SOURCE OBJECT INSTANTIATION_SCN
------ ---------- -----------------
BLACK DEMO.T1 329644
BLACK DEMO.T2 329644
We can create the apply as below. In this case, creating the rules implicitly creates the associated apply if it doesn't exist:
-- Create the Apply and add the DEMO.T1 table
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'demo.t1',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
END;
/

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'demo.t2',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
END;
/
The configuration is over...

Start And Monitor Oracle Streams

Once the Streams capture and apply are created, start both of them:
exec dbms_capture_adm.start_capture('streams_capture');

exec dbms_apply_adm.start_apply('streams_apply');

Before we test that changes on the source database are replicated as expected to the destination server, we can check the status of the components involved. We'll start by checking the status of the log transport service on the BLACK instance and manage any error:
show parameter log_archive_dest_2
show parameter log_archive_dest_state_2

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

col dest_id format 99
col timestamp format a20
col message format a90 wor wra
set pages 1000
set lines 120
select dest_id,
timestamp,
message
from v$dataguard_status
order by timestamp;

select process,
pid,
status,
sequence#,
thread#
from v$managed_standby;
You can check that the files sent by BLACK are registered as expected on WHITE; connect to the WHITE database and execute:
select process,
status,
client_process,
sequence#,
thread#
from v$managed_standby;

set lines 120
set pages 1000
col consumer_name format a15
col name format a65 wor wra
col source format a6
col t# format 99
col seq# format 9999999

select consumer_name ,
source_database source,
thread# t#,
sequence# seq#,
first_scn,
name,
purgeable
from dba_registered_archived_log
order by source_database, first_scn;
You can also check the status of the standby logs on WHITE:
set lines 120
set pages 1000
col dbid format 999999999999999
col bytes format 999,999,999,999
col first_change# format 999999999999999
col sequence# format 9999999

select dbid,
sequence#,
bytes,
status,
first_change#
from v$standby_log;
The streams capture should be running without any error on WHITE:
set lines 120
set pages 1000
col capture_name format a15
col status format a7
col source format a6
col error_message format a85 wor wra

select capture_name,
status,
source_database source,
error_message
from dba_capture;

set lines 120
set pages 1000
col capture_name format a15
col state format a50 wor wra
col sysdate format a20
col capture_time format a20
col total_messg format 99999999999
col apply_name format a15

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

select capture_name,
state,
capture_time,
total_messages_created total_messg,
apply_name
from v$streams_capture;
The apply should be running without any error on WHITE too:

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

set lines 120
set pages 1000
col apply_name format a15
col status format a7
col status_change_time format a20
col error_message format a60 wor wra

select apply_name,
status,
status_change_time,
error_message
from dba_apply;

set lines 120
set pages 1000
col apply_name format a15
col source_database format a7
col sysdate format a20
col apply_time format a20
col applied_message_create_time format a20

select apply_name,
source_database,
sysdate,
apply_time,
applied_message_create_time
from dba_apply_progress;
If there is any remaining error, you'll have to correct it ;-).
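For example, a minimal sketch to list the apply errors and retry the errored transactions once their root cause has been fixed (dbms_apply_adm.execute_all_errors simply re-executes every error transaction of the apply):
col error_message format a60

select apply_name,
local_transaction_id,
error_message
from dba_apply_error;

exec dbms_apply_adm.execute_all_errors('streams_apply')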

Test the Replication

Testing the replication is pretty easy, connect to the BLACK database and run some DML and DDL commands:
sqlplus demo/demo@BLACK

insert into t1
values (3,'data 3');

commit;

select * from t1;

ID LIB
-- ------
3 data 3
1 data 1
2 data 2

alter table t2 add (new_column number);

update t2 set new_column=2;

commit;

select * from t2;

ID LIB NEW_COLUMN
-- ------ ----------
1 data 1 2

exit;
Once the changes are performed, you can check that they are applied to the destination:
sqlplus demo/demo@WHITE

select * from t1;

ID LIB
-- ------
3 data 3
1 data 1
2 data 2

select * from t2;

ID LIB NEW_COLUMN
-- ------ ----------
1 data 1 2

exit;
You see how easy it is!

Other Considerations

What next? Because the simplest part of the story actually ends here!

We can easily imagine you'll use Streams downstream capture to upgrade a database to a new release or patch set. If that's the case, consider using dynamic service registration to make the application connect to your new target seamlessly. Obviously, I didn't say how to deal with unsupported types or operations, or how to build a really advanced rollback scenario. That's a totally different story... in a totally different dimension.
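As a hint, here is a minimal sketch of what the service relocation could look like on the new database once you are ready to switch the application over; the APP_SRV service name is purely hypothetical and not used anywhere else in this post:
begin
  -- create and start the application service on WHITE;
  -- the listener will register it dynamically
  dbms_service.create_service(
    service_name => 'APP_SRV',
    network_name => 'APP_SRV');
  dbms_service.start_service('APP_SRV');
end;
/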

Concerning this hands-on, we'll finish by cleaning up our environment, i.e. disabling the log transport service, removing the supplemental logging from BLACK, dropping the WHITE database and its instance, and dropping the DEMO schema; the script below shows how to deal with those operations:
. oraenv
BLACK

sqlplus / as sysdba

alter system reset log_archive_dest_2;
alter system reset log_archive_dest_state_2;
alter system reset log_archive_config;

alter system set log_archive_dest_2='' scope=memory;
alter system set log_archive_config='' scope=memory;

alter database drop supplemental log
data (primary key, unique index) columns;

select SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI
from gv$database;

drop user demo cascade;

exit;

. oraenv
WHITE

$ORACLE_HOME/bin/rman target /
startup force mount
sql 'alter system enable restricted session';
drop database including backups;

exit;

###################################
# Delete Remaining Database Files #
###################################
# Delete the entries in $ORACLE_HOME/dbs
rm -f $ORACLE_HOME/dbs/*WHITE*

# Delete the Streams Registered Archived Logs
# (RMAN doesn't do it)
rm -f /u01/app/oracle/oradata/WHITE/archivelogs/*

# Delete the database Directories
rm -rf $ORACLE_BASE/oradata/WHITE
rm -rf $ORACLE_BASE/admin/WHITE
rm -rf $ORACLE_BASE/diag/rdbms/white

# Delete the WHITE instance from /etc/oratab
ed /etc/oratab <<EOF
/^WHITE/d
wq
EOF

# Remove the entries from the network configuration
# like tnsnames.ora and listener.ora manually
vi $ORACLE_BASE/network/admin/tnsnames.ora
vi $ORACLE_BASE/network/admin/listener.ora

lsnrctl reload LISTENER
Leave your comments if you find this post useful or confusing. Let me know if you find any typos!
