Thursday, December 23, 2010

Post 25 of series, Configure discard file, exception table and stored procedure in GoldenGate replication, with tutorial

For the Gurmukhs, the Way is obvious. At the Lord's Door, they face no obstructions. They praise the Lord's Name, they keep the Naam in their minds, and they remain attached to the Love of the Naam. The Unstruck Celestial Music vibrates for them at the Lord's Door, and they are honored at the True Door.


Index to all the posts of Gurpartap Singh's Blog


Let's configure a discard file first, then switch to an exception table, and finally call a stored procedure from within GoldenGate replication. Which one you use depends on your needs, but I like the exception table because it gives the output in Oracle tabular form, which makes it easy to isolate a particular set of SQLs with simple SQL commands.
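For example, once an exception table is in place (we create one later in this post), a simple query like the following sketch isolates recent constraint violations; the column names assume the exception-table layout used below:

```sql
-- Sketch: isolate recent unique-constraint violations (ORA-00001)
-- captured in the exception table defined later in this post.
SELECT source_table, target_table, optype, dml_date, dberrmsg
FROM   gurpartap.employee_exception
WHERE  dberrnum = 1                -- ORA-00001
AND    dml_date > SYSDATE - 1      -- last 24 hours
ORDER  BY dml_date;
```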


Let's create some simple tables and data for this type of replication.

create table employee (
emp_id number,
firstname varchar2(20),
lastname varchar2(20)
);

alter table employee add constraint PK_employee primary key (emp_id);

create table location (
location_id number,
province varchar2(20),
city varchar2(20)
);

alter table location add constraint pk_location primary key (location_id);

To add these tables to GoldenGate replication, use the following:

DBLOGIN USERID goldengate, PASSWORD goldengate
ADD TRANDATA goldengate.employee
ADD TRANDATA goldengate.location


ADD EXTRACT etest, TRANLOG, BEGIN NOW
ADD EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/test/t1, EXTRACT etest


EDIT PARAMS etest


EXTRACT etest
USERID goldengate@simarsv1, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
RMTHOST goldengate, MGRPORT 7909
EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/test/t1
TRANLOGOPTIONS ASMUSER sys@RAC1ASM, ASMPASSWORD AACAAAAAAAAAAAJASAOBIHRGNFGAYCKBHJPAGIXFGGBJUAXI, ENCRYPTKEY default
TABLE gurpartap.employee;
TABLE gurpartap.location;



Now add the data pump ptest:
ADD EXTRACT ptest, EXTTRAILSOURCE /u01/app/oracle/product/goldengate/dirdat/test/t1

ADD RMTTRAIL /u01/app/oracle/product/goldengate/dirdat/ts, EXTRACT ptest

EDIT PARAMS ptest

EXTRACT ptest
RMTHOST goldengate, MGRPORT 7909
RMTTRAIL /u01/app/oracle/product/goldengate/dirdat/ts
PASSTHRU
TABLE gurpartap.employee;
TABLE gurpartap.location;




The logs show:


SQL> conn goldengate/goldengate
Connected.
SQL> create table employee (
emp_id number,
firstname varchar2(20),
lastname varchar2(20)
);
2 3 4 5
Table created.

SQL> alter table employee add constraint PK_employee primary key (emp_id);

Table altered.

SQL> create table location (
location_id number,
province varchar2(20),
city varchar2(20)
); 2 3 4 5

Table created.

SQL> alter table location add constraint pk_location primary key (location_id);

Table altered.

SQL>




GGSCI (rac1.rac.meditate.com) 7> DBLOGIN USERID goldengate, PASSWORD goldengate
Successfully logged into database.


GGSCI (rac1.rac.meditate.com) 10> ADD TRANDATA goldengate.employee

Logging of supplemental redo data enabled for table GOLDENGATE.EMPLOYEE.

GGSCI (rac1.rac.meditate.com) 11> ADD TRANDATA GOLDENGATE.location

Logging of supplemental redo data enabled for table GOLDENGATE.LOCATION.

GGSCI (rac1.rac.meditate.com) 12>



--------

GGSCI (rac1.rac.meditate.com) 1> ADD EXTRACT etest, TRANLOG, BEGIN NOW
EXTRACT added.


GGSCI (rac1.rac.meditate.com) 2> ADD EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl, EXTRACT etest
EXTTRAIL added.


GGSCI (rac1.rac.meditate.com) 3> EDIT PARAMS etest

EXTRACT etest
SETENV (ORACLE_SID=SIMAR1)
USERID goldengate@simarsv1, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
TRANLOGOPTIONS ASMUSER sys@RAC1ASM, ASMPASSWORD AACAAAAAAAAAAAJASAOBIHRGNFGAYCKBHJPAGIXFGGBJUAXI, ENCRYPTKEY default
RMTHOST goldengate, MGRPORT 7909
EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/test/t1
TABLE goldengate.employee;
TABLE goldengate.location;


GGSCI (rac1.rac.meditate.com) 4> ADD EXTRACT ptest, EXTTRAILSOURCE /u01/app/oracle/product/goldengate/dirdat/test/t1
EXTRACT added.


GGSCI (rac1.rac.meditate.com) 5> ADD RMTTRAIL /u01/app/oracle/product/goldengate/dirdat/ts, EXTRACT ptest
RMTTRAIL added.


GGSCI (rac1.rac.meditate.com) 6> EDIT PARAMS ptest



EXTRACT ptest
RMTHOST goldengate, MGRPORT 7909
RMTTRAIL /u01/app/oracle/product/goldengate/dirdat/ts
PASSTHRU
TABLE goldengate.employee;
TABLE goldengate.location;



Now let's start the extracts:


GGSCI (rac1.rac.meditate.com) 4> start extract etest

Sending START request to MANAGER ...
EXTRACT ETEST starting


GGSCI (rac1.rac.meditate.com) 5> start extract ptest

Sending START request to MANAGER ...
EXTRACT PTEST starting


GGSCI (rac1.rac.meditate.com) 6>



Now let's create the definitions file:


cd /u01/app/oracle/product/goldengate/dirprm
vi testplace.prm

DEFSFILE /u01/app/oracle/product/goldengate/dirdat/test
USERID goldengate@simarsv1, PASSWORD AACAAAAAAAAAAALAJAVCJBLGSBEABDQIAAVAUCVJUHDGSFAJ, ENCRYPTKEY default
TABLE goldengate.employee;
TABLE goldengate.location;

cd /u01/app/oracle/product/goldengate
./defgen paramfile dirprm/testplace.prm


Output is as:

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$ cd /u01/app/oracle/product/goldengate/dirprm

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate/dirprm
$ vi testplace.prm

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate/dirprm
$ cd /u01/app/oracle/product/goldengate

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$ ./defgen paramfile dirprm/testplace.prm

***********************************************************************
Oracle GoldenGate Table Definition Generator for Oracle
Version 10.4.0.19 Build 002
Linux, x64, 64bit (optimized), Oracle 11 on Sep 18 2009 00:09:13

Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.


Starting at 2010-12-13 23:54:16
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Fri Oct 29 14:21:16 EDT 2010, Release 2.6.18-194.26.1.el5
Node: rac1.rac.meditate.com
Machine: x86_64
soft limit hard limit
Address Space Size : unlimited unlimited
Heap Size : unlimited unlimited
File Size : unlimited unlimited
CPU Time : unlimited unlimited

Process id: 9422

***********************************************************************
** Running with the following parameters **
***********************************************************************
DEFSFILE /u01/app/oracle/product/goldengate/dirdat/test
USERID goldengate@simarsv1, PASSWORD ************************************************* ENCRYPTKEY default
TABLE goldengate.employee;
Retrieving definition for GOLDENGATE.EMPLOYEE
TABLE goldengate.location;
Retrieving definition for GOLDENGATE.LOCATION



Definitions generated for 2 tables in /u01/app/oracle/product/goldengate/dirdat/dirdat/test


oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$

Now scp it to the target as:

scp /u01/app/oracle/product/goldengate/dirdat/dirdat/test goldengate:/u01/app/oracle/product/goldengate/dirdat/dirdat/test


Output is as:

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$ scp /u01/app/oracle/product/goldengate/dirdat/dirdat/test goldengate:/u01/app/oracle/product/goldengate/dirdat/dirdat/test

Password:
test 100% 1237 1.2KB/s 00:00

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$

---------------------------------------------------



Now on the target, let's create the tables:


create table remployee (
emp_id number,
firstname varchar2(20),
lastname varchar2(20)
);

alter table remployee add constraint PK_remployee primary key (emp_id);

create table rlocation (
location_id number,
province varchar2(20),
city varchar2(20)
);

alter table rlocation add constraint pk_rlocation primary key (location_id);


Output is as:

SQL> conn goldengate/goldengate
Connected.
SQL> create table remployee (
emp_id number,
firstname varchar2(20),
lastname varchar2(20)
);
2 3 4 5
Table created.

SQL> alter table remployee add constraint PK_remployee primary key (emp_id);

Table altered.

SQL> create table rlocation (
location_id number,
province varchar2(20),
city varchar2(20)
); 2 3 4 5

Table created.

SQL> alter table rlocation add constraint pk_rlocation primary key (location_id);


Table altered.

SQL> SQL>





Now the GoldenGate configuration on the target is:


ADD REPLICAT rtest, EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/ts




EDIT PARAMS rtest

REPLICAT rtest
SOURCEDEFS /u01/app/oracle/product/goldengate/dirdat/test
DISCARDFILE /u01/app/oracle/product/goldengate/dirdat/distest, purge
DISCARDROLLOVER ON sunday
USERID goldengate, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
REPERROR (DEFAULT, DISCARD)
GETINSERTS
GETUPDATES
GETDELETES
MAP goldengate.employee, TARGET goldengate.remployee;
MAP goldengate.location, TARGET goldengate.rlocation;



Output is as:
GGSCI (goldengate) 1> ADD REPLICAT rtest, EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/ts
REPLICAT added.


GGSCI (sedm1610) 2>
GGSCI (sedm1610) 4> start replicat RTEST

Sending START request to MANAGER ...
REPLICAT RTEST starting


GGSCI (sedm1610) 5>
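To verify that the processes are running and to see how many operations the replicat has applied, the standard GGSCI status commands can be used (a sketch; the process names match those created above):

```
GGSCI> INFO ALL
GGSCI> INFO REPLICAT rtest, DETAIL
GGSCI> STATS REPLICAT rtest
```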




TEST 1:
On source:
select * from employee;
select * from location;
insert into employee values (1,'GURPARTAP','SINGH');
insert into location values (1,'BRITISH COLUMBIA','WHISTLER');
commit;
select * from employee;
select * from location;

On target:
These will be replicated as:
select * from remployee;
select * from rlocation;


Results as expected:

On source:

SQL> select * from employee;
select * from location;
no rows selected

SQL>

no rows selected

SQL>
SQL> insert into employee values (1,'GURPARTAP','SINGH');

1 row created.

SQL> insert into location values (1,'BRITISH COLUMBIA','WHISTLER');

1 row created.

SQL> commit:
2
SQL> commit;

Commit complete.

SQL> select * from employee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH

1 row selected.

SQL> select * from location;

LOCATION_ID PROVINCE CITY
----------- -------------------- --------------------
1 BRITISH COLUMBIA WHISTLER

1 row selected.

SQL>


On target:

SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH

1 row selected.

SQL> select * from rlocation;

LOCATION_ID PROVINCE CITY
----------- -------------------- --------------------
1 BRITISH COLUMBIA WHISTLER

1 row selected.

SQL>




TEST2:
On target:
select * from remployee;
insert into remployee values (2,'SIMAR','SINGH');
commit;
select * from remployee;

On source:
select * from employee;
select * from location;
insert into employee values (2,'SIMAR','SINGH');
insert into location values (2,'BRITISH COLUMBIA','VANCOUVER');
commit;
select * from employee;
select * from location;


On target:
These will be replicated as:
select * from remployee;
select * from rlocation;


Results are not as expected:

on target:

SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH

1 row selected.

SQL> insert into remployee values (2,'SIMAR','SINGH');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
2 SIMAR SINGH
1 GURPARTAP SINGH

2 rows selected.

SQL>

On source:

SQL> insert into employee values (2,'SIMAR','SINGH');

1 row created.

SQL> insert into location values (2,'BRITISH COLUMBIA','VANCOUVER');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from employee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH
2 SIMAR SINGH

2 rows selected.

SQL> select * from location;

LOCATION_ID PROVINCE CITY
----------- -------------------- --------------------
1 BRITISH COLUMBIA WHISTLER
2 BRITISH COLUMBIA VANCOUVER

2 rows selected.

SQL>


On target:

SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
2 SIMAR SINGH
1 GURPARTAP SINGH

2 rows selected.

SQL> select * from rlocation;

LOCATION_ID PROVINCE CITY
----------- -------------------- --------------------
1 BRITISH COLUMBIA WHISTLER
2 BRITISH COLUMBIA VANCOUVER

2 rows selected.

SQL>


The discard file shows:

oracle : goldengate : @goldengate : /u01/app/oracle/product/goldengate
$ cat /u01/app/oracle/product/goldengate/dirdat/distest
Oracle GoldenGate Delivery for Oracle process started, group RTEST discard file opened: 2010-12-14 20:19:22

Current time: 2010-12-14 20:28:47

OCI Error ORA-00001: unique constraint (GOLDENGATE.PK_REMPLOYEE) violated (status = 1), SQL
Operation failed at seqno 2 rba 971
Discarding record on action DISCARD on error 1
Problem replicating GOLDENGATE.EMPLOYEE to GOLDENGATE.REMPLOYEE
Error (1) occurred with insert record (target format)...
*
EMP_ID = 2
FIRSTNAME = SIMAR
LASTNAME = SINGH
*

oracle : goldengate : @goldengate : /u01/app/oracle/product/goldengate
$



Note that even though we inserted two rows into two different tables on the source in a single transaction, the transaction does not fail on the target. Even though one of the rows already exists there, GoldenGate inserts the other row into the other table and writes the conflicting row to the discard file.
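This behavior is controlled by the REPERROR parameter in the replicat parameter file. If you would rather have the replicat abend on any unhandled error instead of discarding the offending record, the parameter can be changed; a sketch of the common forms (DISCARD is what we used above):

```
REPERROR (DEFAULT, ABEND)      -- stop the replicat on any unhandled error (the default)
REPERROR (DEFAULT, DISCARD)    -- write the failing record to the discard file and continue
REPERROR (DEFAULT, EXCEPTION)  -- route the failing record to an EXCEPTIONSONLY map
REPERROR (1, IGNORE)           -- ignore a specific error number, here ORA-00001
```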





------------------------------------------------------

Now let's use the exception table instead of the discard file:

-- handle exception
MAP goldengate.employee, TARGET goldengate.employee_exception, EXCEPTIONSONLY, &
INSERTALLRECORDS , &
COLMAP (USEDEFAULTS, &
SOURCE_TABLE = @GETENV ("GGHEADER", "TABLENAME"), &
TARGET_TABLE = "GOLDENGATE.REMPLOYEE", &
DML_DATE = @GETENV ("GGHEADER", "COMMITTIMESTAMP"), &
OPTYPE = @GETENV ("GGHEADER", "OPTYPE"), &
DBERRNUM = @GETENV("LASTERR", "DBERRNUM"), &
DBERRMSG = @GETENV("LASTERR", "DBERRMSG"));

-- handle exception
MAP GOLDENGATE.location, TARGET GOLDENGATE.location_exception, EXCEPTIONSONLY, &
INSERTALLRECORDS , &
COLMAP (USEDEFAULTS, &
SOURCE_TABLE = @GETENV ("GGHEADER", "TABLENAME"), &
TARGET_TABLE = "GOLDENGATE.RLOCATION", &
DML_DATE = @GETENV ("GGHEADER", "COMMITTIMESTAMP"), &
OPTYPE = @GETENV ("GGHEADER", "OPTYPE"), &
DBERRNUM = @GETENV("LASTERR", "DBERRNUM"), &
DBERRMSG = @GETENV("LASTERR", "DBERRMSG"));


So, the new parameter file looks like this:

oracle : goldengate : @goldengate : /u01/app/oracle/product/goldengate/dirprm
$ cat rtest.prm
REPLICAT rtest
SOURCEDEFS /u01/app/oracle/product/goldengate/dirdat/test
DISCARDFILE /u01/app/oracle/product/goldengate/dirdat/distest, purge
DISCARDROLLOVER ON sunday
--- Please note that here goldengate is the name of the target machine, the target database and the target database user id.
USERID goldengate@goldengate, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
REPERROR (DEFAULT, EXCEPTION)
GETINSERTS
GETUPDATES
GETDELETES
MAP gurpartap.employee, TARGET gurpartap.remployee;
-- handle exception
MAP gurpartap.employee, TARGET gurpartap.employee_exception, EXCEPTIONSONLY, &
INSERTALLRECORDS , &
COLMAP (USEDEFAULTS, &
SOURCE_TABLE = @GETENV ("GGHEADER", "TABLENAME"), &
TARGET_TABLE = "GURPARTAP.REMPLOYEE", &
DML_DATE = @GETENV ("GGHEADER", "COMMITTIMESTAMP"), &
OPTYPE = @GETENV ("GGHEADER", "OPTYPE"), &
DBERRNUM = @GETENV("LASTERR", "DBERRNUM"), &
DBERRMSG = @GETENV("LASTERR", "DBERRMSG"));
-- new
REPERROR (DEFAULT, EXCEPTION)
GETINSERTS
GETUPDATES
GETDELETES
MAP gurpartap.location, TARGET gurpartap.rlocation;
-- handle exception
MAP gurpartap.location, TARGET gurpartap.location_exception, EXCEPTIONSONLY, &
INSERTALLRECORDS , &
COLMAP (USEDEFAULTS, &
SOURCE_TABLE = @GETENV ("GGHEADER", "TABLENAME"), &
TARGET_TABLE = "GURPARTAP.RLOCATION", &
DML_DATE = @GETENV ("GGHEADER", "COMMITTIMESTAMP"), &
OPTYPE = @GETENV ("GGHEADER", "OPTYPE"), &
DBERRNUM = @GETENV("LASTERR", "DBERRNUM"), &
DBERRMSG = @GETENV("LASTERR", "DBERRMSG"));

We also need to create the exception tables:

create table GURPARTAP.LOCATION_EXCEPTION (
LOCATION_ID number,
PROVINCE varchar2(20),
CITY varchar2(20),
SOURCE_TABLE varchar2(100),
TARGET_TABLE varchar2(100),
DML_DATE date,
OPTYPE varchar2(20),
DBERRNUM number,
DBERRMSG VARCHAR2(4000)
);



create table GURPARTAP.EMPLOYEE_EXCEPTION (
emp_id number,
firstname varchar2(20),
lastname varchar2(20),
SOURCE_TABLE varchar2(100),
TARGET_TABLE varchar2(100),
DML_DATE date,
OPTYPE varchar2(20),
DBERRNUM number,
DBERRMSG VARCHAR2(4000)
);
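Since the replicat uses INSERTALLRECORDS into these exception tables, every failed operation adds a row and the tables grow without bound; a simple housekeeping sketch (the 30-day retention is just an example):

```sql
-- Housekeeping sketch: purge exception rows older than 30 days.
DELETE FROM gurpartap.employee_exception WHERE dml_date < SYSDATE - 30;
DELETE FROM gurpartap.location_exception WHERE dml_date < SYSDATE - 30;
COMMIT;
```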

Output is as:

SQL> show user
USER is "GURPARTAP"
SQL> create table GURPARTAP.EMPLOYEE_EXCEPTION (
2 emp_id number,
3 firstname varchar2(20),
4 lastname varchar2(20),
5 SOURCE_TABLE varchar2(100),
6 TARGET_TABLE varchar2(100),
7 DML_DATE date,
8 OPTYPE varchar2(20),
9 DBERRNUM number,
10 DBERRMSG VARCHAR2(4000)
11 );

Table created.

SQL> desc GURPARTAP.rlocation
Name Null? Type
----------------------------------------------------------------- -------- --------------------------------------------
LOCATION_ID NOT NULL NUMBER
PROVINCE VARCHAR2(20)
CITY VARCHAR2(20)

SQL> create table GURPARTAP.LOCATION_EXCEPTION (
2 LOCATION_ID number,
3 PROVINCE varchar2(20),
4 CITY varchar2(20),
5 SOURCE_TABLE varchar2(100),
6 TARGET_TABLE varchar2(100),
7 DML_DATE date,
8 OPTYPE varchar2(20),
9 DBERRNUM number,
10 DBERRMSG VARCHAR2(4000)
11 );

Table created.

SQL> select * from rlocation;

LOCATION_ID PROVINCE CITY
----------- -------------------- --------------------
1 BRITISH COLUMBIA WHISTLER

1 row selected.

SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH

1 row selected.

SQL>






Let's repeat test 2 again.

Before that, I stopped both the extracts and the replicat, truncated all four tables (gurpartap.employee and gurpartap.location on the source; gurpartap.remployee and gurpartap.rlocation on the target), and restarted the extracts and the replicat with the BEGIN NOW option:
alter extract etest,begin now
alter extract ptest,begin now
start extract etest
start extract ptest
alter replicat rtest,begin now
start replicat rtest





TEST2:
On target:
select * from remployee;
insert into remployee values (2,'SIMAR','SINGH');
commit;
select * from remployee;

On source:
select * from employee;
select * from location;
insert into employee values (2,'SIMAR','SINGH');
insert into location values (2,'BRITISH COLUMBIA','VANCOUVER');
commit;
select * from employee;
select * from location;


On target:
These will be replicated as:
select * from remployee;
select * from rlocation;

The results are the same as earlier: even though one row failed and was written
to the exception table, the other row was replicated. Both rows were inserted
in the same transaction on the source.



Here is the output:

On target:
SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH

1 row selected.

SQL> insert into remployee values (2,'SIMAR','SINGH');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH
2 SIMAR SINGH

2 rows selected.

SQL>


On source:
SQL> insert into employee values (2,'SIMAR','SINGH');

1 row created.

SQL> insert into location values (2,'BRITISH COLUMBIA','VANCOUVER');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from employee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH
2 SIMAR SINGH

2 rows selected.

SQL> select * from location;

LOCATION_ID PROVINCE CITY
----------- -------------------- --------------------
1 BRITISH COLUMBIA WHISTLER
2 BRITISH COLUMBIA VANCOUVER

2 rows selected.

SQL>


On target:


SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH
2 SIMAR SINGH

2 rows selected.

SQL> select * from rlocation;

LOCATION_ID PROVINCE CITY
----------- -------------------- --------------------
1 BRITISH COLUMBIA WHISTLER
2 BRITISH COLUMBIA VANCOUVER

2 rows selected.


And it wrote that failed entry in the exceptions table as:



SQL> select * from GURPARTAP.EMPLOYEE_EXCEPTION ;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
SOURCE_TABLE
----------------------------------------------------------------------------------------------------
TARGET_TABLE DML_DATE
---------------------------------------------------------------------------------------------------- ---------
OPTYPE DBERRNUM
-------------------- ----------
DBERRMSG
------------------------------------------------------------------------------------------------------------------------
2 SIMAR SINGH
GURPARTAP.EMPLOYEE
GURPARTAP.REMPLOYEE 15-DEC-10
INSERT 1
OCI Error ORA-00001: unique constraint (GURPARTAP.PK_REMPLOYEE) violated (status = 1), SQL


1 row selected.

SQL>
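The unformatted SELECT * output above is wide and hard to read; a narrower query such as this sketch pulls out just the interesting columns:

```sql
COL source_table FORMAT a25
COL dberrmsg     FORMAT a60
SELECT emp_id, source_table, optype, dberrnum, dberrmsg
FROM   gurpartap.employee_exception
ORDER  BY dml_date;
```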




Now we will create a stored procedure that inserts into another table, i.e. a third table on the target.

I will create the third table on the target now:

create table start_date (
emp_id number,
start_date date
);


==============

CREATE OR REPLACE PROCEDURE ckeck_insert (emp_id NUMBER) AS
BEGIN
insert into start_date values (emp_id,sysdate);
commit;
END ckeck_insert;
/


Then, to test that it is working, use:

CALL ckeck_insert (3);



Output is as:

SQL> create table start_date (
emp_id number,
start_date date
);

2 3 4
Table created.

SQL> SQL> show user
USER is "GURPARTAP"
SQL> CREATE OR REPLACE PROCEDURE ckeck_insert (emp_id NUMBER) AS
BEGIN
insert into start_date values (emp_id,sysdate);
commit;
END ckeck_insert;
/
2 3 4 5 6
Procedure created.

SQL> CALL ckeck_insert (3);

Call completed.

SQL> select * from start_date;

EMP_ID START_DAT
---------- ---------
3 15-DEC-10

1 row selected.

SQL>


Now, I will truncate this table.

SQL> truncate table start_date;

Table truncated.

SQL> select * from start_date;

no rows selected

SQL>


Please note the following in the parameter file below:

sqlexec (spname gurpartap.ckeck_insert , id ckeck_insert1, &
params(EMP_ID=@IF (@COLTEST(emp_id, MISSING), @COLSTAT(NULL), emp_id)), ERROR RAISE, TRACE ALL);

spname -> the name of the procedure you want to call.
id ckeck_insert1 -> required if you are calling this procedure more than once; in that case the id must be different for every call, e.g. ckeck_insert2 for the second.
params -> here we pass a parameter to the procedure; the name of the parameter is emp_id. Note that the parameter name used here must match the variable name defined in the procedure. The @COLTEST function checks the source column: if the emp_id value is missing, a NULL is passed, otherwise the value of emp_id is used.
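As the note on id implies, calling the same procedure more than once in one MAP requires a distinct logical id per call; a sketch of the shape (the second call is hypothetical, added only to show distinct ids):

```
MAP gurpartap.employee, TARGET gurpartap.remployee, &
sqlexec (spname gurpartap.ckeck_insert, id ckeck_insert1, &
params (EMP_ID = @IF (@COLTEST (emp_id, MISSING), @COLSTAT (NULL), emp_id)), ERROR RAISE), &
sqlexec (spname gurpartap.ckeck_insert, id ckeck_insert2, &
params (EMP_ID = @IF (@COLTEST (emp_id, MISSING), @COLSTAT (NULL), emp_id)), ERROR RAISE);
```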


Now I will update the replicat parameter file as:

REPLICAT rtest
SOURCEDEFS /u01/app/oracle/product/goldengate/dirdat/test
DISCARDFILE /u01/app/oracle/product/goldengate/dirdat/distest, purge
DISCARDROLLOVER ON sunday
--- Please note that here goldengate is the name of the target machine, the target database and the target database user id.
USERID goldengate@goldengate, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
REPERROR (DEFAULT, EXCEPTION)
GETINSERTS
GETUPDATES
GETDELETES
MAP gurpartap.employee, TARGET gurpartap.remployee, &
sqlexec (spname gurpartap.ckeck_insert , id ckeck_insert1, &
params(EMP_ID=@IF (@COLTEST(emp_id, MISSING), @COLSTAT(NULL), emp_id)), ERROR RAISE, TRACE ALL);
-- handle exception
MAP gurpartap.employee, TARGET gurpartap.employee_exception, EXCEPTIONSONLY, &
INSERTALLRECORDS , &
COLMAP (USEDEFAULTS, &
SOURCE_TABLE = @GETENV ("GGHEADER", "TABLENAME"), &
TARGET_TABLE = "GURPARTAP.REMPLOYEE", &
DML_DATE = @GETENV ("GGHEADER", "COMMITTIMESTAMP"), &
OPTYPE = @GETENV ("GGHEADER", "OPTYPE"), &
DBERRNUM = @GETENV("LASTERR", "DBERRNUM"), &
DBERRMSG = @GETENV("LASTERR", "DBERRMSG"));
-- new
REPERROR (DEFAULT, EXCEPTION)
GETINSERTS
GETUPDATES
GETDELETES
MAP gurpartap.location, TARGET gurpartap.rlocation;
-- handle exception
MAP gurpartap.location, TARGET gurpartap.location_exception, EXCEPTIONSONLY, &
INSERTALLRECORDS , &
COLMAP (USEDEFAULTS, &
SOURCE_TABLE = @GETENV ("GGHEADER", "TABLENAME"), &
TARGET_TABLE = "GURPARTAP.RLOCATION", &
DML_DATE = @GETENV ("GGHEADER", "COMMITTIMESTAMP"), &
OPTYPE = @GETENV ("GGHEADER", "OPTYPE"), &
DBERRNUM = @GETENV("LASTERR", "DBERRNUM"), &
DBERRMSG = @GETENV("LASTERR", "DBERRMSG"));








Let's repeat test 2 again.

Before that, I stopped both the extracts and the replicat, truncated all four tables (gurpartap.employee and gurpartap.location on the source; gurpartap.remployee and gurpartap.rlocation on the target), and restarted the extracts and the replicat with the BEGIN NOW option:
alter extract etest,begin now
alter extract ptest,begin now
start extract etest
start extract ptest
alter replicat rtest,begin now
start replicat rtest





TEST2:
On target:
select * from remployee;
insert into remployee values (2,'SIMAR','SINGH');
commit;
select * from remployee;

On source:
select * from employee;
select * from location;
insert into employee values (2,'SIMAR','SINGH');
insert into location values (2,'BRITISH COLUMBIA','VANCOUVER');
commit;
select * from employee;
select * from location;


On target:
These will be replicated as:
select * from remployee;
select * from rlocation;




The results: even though one row failed and was written to the exception
table, the other row was replicated. Both rows were inserted in the same
transaction on the source.



Here is the output:

On target :

SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH

1 row selected.

SQL> insert into remployee values (2,'SIMAR','SINGH');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH
2 SIMAR SINGH

2 rows selected.

SQL>


On source:

SQL> insert into employee values (2,'SIMAR','SINGH');

1 row created.

SQL> insert into location values (2,'BRITISH COLUMBIA','VANCOUVER');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from employee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH
2 SIMAR SINGH

2 rows selected.

SQL> select * from location;

LOCATION_ID PROVINCE CITY
----------- -------------------- --------------------
1 BRITISH COLUMBIA WHISTLER
2 BRITISH COLUMBIA VANCOUVER

2 rows selected.

SQL>


On target:


SQL> select * from remployee;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
1 GURPARTAP SINGH
2 SIMAR SINGH

2 rows selected.

SQL> select * from rlocation;

LOCATION_ID PROVINCE CITY
----------- -------------------- --------------------
1 BRITISH COLUMBIA WHISTLER
2 BRITISH COLUMBIA VANCOUVER

2 rows selected.


And it wrote that failed entry in the exceptions table as:



SQL> select * from GURPARTAP.EMPLOYEE_EXCEPTION ;

EMP_ID FIRSTNAME LASTNAME
---------- -------------------- --------------------
SOURCE_TABLE
----------------------------------------------------------------------------------------------------
TARGET_TABLE DML_DATE
---------------------------------------------------------------------------------------------------- ---------
OPTYPE DBERRNUM
-------------------- ----------
DBERRMSG
------------------------------------------------------------------------------------------------------------------------
2 SIMAR SINGH
GURPARTAP.EMPLOYEE
GURPARTAP.REMPLOYEE 15-DEC-10
INSERT 1
OCI Error ORA-00001: unique constraint (GURPARTAP.PK_REMPLOYEE) violated (status = 1), SQL


1 row selected.

SQL>





SQL> select * from START_DATE;

EMP_ID START_DAT
---------- ---------
1 15-DEC-10
2 15-DEC-10

2 rows selected.

SQL>

Next we will create a primary key index on the table the procedure inserts into, i.e. START_DATE, and see how inserts and updates behave.

O Siblings of Destiny, one who becomes Gurmukh and understands this is very rare. To act without understanding is to lose the treasure of this human life. 1 Pause Those who have tasted it, enjoy its flavor; without tasting it, they wander in doubt, lost and deceived. The True
Name is the Ambrosial Nectar; no one can describe it. Drinking it in, one becomes honorable, absorbed in the Perfect Word of the Shabad.




Saturday, December 4, 2010

Post 24 of series, Goldengate initial load using Cross platform transportable tablespaces Solaris to Linux

Upon that cosmic plate of the sky, the sun and the moon are the lamps. The stars and their orbs are the studded pearls. The
fragrance of sandalwood in the air is the temple incense, and the wind is the fan. All the plants of the world are the altar flowers in offering to You, O Luminous Lord. 1 What a beautiful Aartee, lamp-lit worship service this is! O Destroyer of Fear, this is Your Ceremony of Light. The Unstruck Sound-current of the Shabad is the vibration of the temple drums.



Index of all the posts of Gurpartap Singh's Blog


In this post we will do the initial load using cross-platform transportable tablespaces. Our source is Solaris 64-bit with big endian format and
the target is Linux x86 64-bit with little endian format. We will use the package DBMS_FILE_TRANSFER to transfer files from ASM to ASM,
source to target. If both platforms had the same endian format, you would not need extra space to stage the datafiles before transfer. In our case we
need to convert from big endian format to little endian format, so we will need extra space to hold the converted files. The database version on both source and target is 10.2.0.4. On 11g you can use the 'cp' command to transfer files from ASM to ASM between source and target.

Converting files does not affect the original files; new files are created after conversion.
I am using note: 371556.1
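You can confirm the endian formats of the two platforms from the data dictionary; this standard query, run on either database, lists them:

```sql
-- List platforms and their endian formats; match the current platform
-- against v$database.platform_name.
SELECT platform_id, platform_name, endian_format
FROM   v$transportable_platform
ORDER  BY platform_id;
```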

Let's do it now.

Create a single instance database on the source

Server:goldengate
Database name:DFSRDV

Create a test rac instance on target
Server:rac1/rac2/rac3
Database name: DFTGDV
Instance Names: DFTGDV1 (on rac1)
Instance Names: DFTGDV2 (on rac2)
Instance Names: DFTGDV3 (on rac3)


Following is the tnsentry for the target database:

DFTGDV =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 41521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 41521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip)(PORT = 41521))
)
(CONNECT_DATA =
(SERVICE_NAME = DFTGDV)
)
)


Following is the tnsentry for the source database:

DFSRDV =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = goldengate)(PORT = 41521))
)
(CONNECT_DATA =
(SID = DFSRDV)
(SERVER = DEDICATED)
)
)



Create a new tablespace in the source database for testing, then create a schema that uses this tablespace as its
primary tablespace, add another 2 datafiles to it, and then add some tables to this user. I will just grant DBA to this user
instead of doing grants one by one.


Create tablespace:
CREATE TABLESPACE migration DATAFILE '+ORADATA1_DG' SIZE 200M;

Create user:
create user migrate identified by migrate default tablespace migration temporary tablespace temp;
grant connect,resource,dba to migrate;

Alter tablespace:
alter tablespace migration add datafile '+ORADATA1_DG' SIZE 200M;
alter tablespace migration add datafile '+ORADATA1_DG' SIZE 200M;



Output is as:


oracle : goldengate : @DFSRDV : /home/oracle
$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Dec 3 18:59:54 2010

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> CREATE TABLESPACE migration DATAFILE '+ORADATA1_DG' SIZE 200M;

Tablespace created.

SQL> create user migrate identified by migrate default tablespace migration temporary tablespace temp;

User created.

SQL> grant connect,resource,dba to migrate;

Grant succeeded.

SQL> alter tablespace migration add datafile '+ORADATA1_DG' SIZE 200M;

Tablespace altered.

SQL> alter tablespace migration add datafile '+ORADATA1_DG' SIZE 200M;

Tablespace altered.

SQL>






Now we should have at least 3 datafiles in this tablespace, and it is the default tablespace of the user:

col FILE_NAME format a90
col TABLESPACE_NAME format a20
select FILE_NAME,TABLESPACE_NAME from dba_data_files where TABLESPACE_NAME='MIGRATION';



Output is as:

SQL> col FILE_NAME format a90
col TABLESPACE_NAME format a20
select FILE_NAME,TABLESPACE_NAME from dba_data_files where TABLESPACE_NAME='MIGRATION';
SQL> SQL>
FILE_NAME TABLESPACE_NAME
------------------------------------------------------------------------------------- --------------------
+ORADATA1_DG/dfsrdv/datafile/migration.309.736802539 MIGRATION
+ORADATA1_DG/dfsrdv/datafile/migration.310.736802721 MIGRATION
+ORADATA1_DG/dfsrdv/datafile/migration.311.736802729 MIGRATION

SQL>




Now let's log in as user migrate and create some tables and data in there before we migrate:

conn migrate/migrate
create table transfer as select * from dba_tables;
select count(*) from transfer;

create table places as select * from dba_objects;
select count(*) from places;

Then let's verify that the tables are in this tablespace:

select table_name,tablespace_name from user_tables;

Here is the output:


SQL> conn migrate/migrate
Connected.
SQL> show user
USER is "MIGRATE"
SQL> create table transfer as select * from dba_tables;

Table created.

SQL> select count(*) from transfer;

COUNT(*)
----------
1589

SQL> create table places as select * from dba_objects;

Table created.

SQL> select count(*) from places;

COUNT(*)
----------
50602

SQL> commit;

Commit complete.

SQL> select table_name,tablespace_name from user_tables;

TABLE_NAME TABLESPACE_NAME
------------------------------ --------------------
PLACES MIGRATION
TRANSFER MIGRATION

SQL>


Now we are ready for the migration:
We will use ASM-to-ASM direct transfer with the DBMS_FILE_TRANSFER package. Since we are doing a cross-platform migration between different
endian formats, i.e. from Solaris[tm] OE (64-bit) to Linux x86 64-bit, we will have to use RMAN CONVERT to change the endian format
from big to little, as Solaris[tm] OE (64-bit) is big endian and Linux x86 64-bit is little endian.
Remember that you need to read the documentation for the limitations of transportable tablespaces; here I will discuss only a few.
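As a side note, the big/little endian difference can be illustrated outside Oracle. This small Python sketch (the general byte-ordering idea only, not Oracle's on-disk datafile format) shows the same 32-bit value laid out in both byte orders:

```python
import struct

# The same 32-bit integer serialized in both byte orders. Solaris on SPARC
# stores multi-byte values big-endian; Linux x86-64 stores them little-endian,
# which is why datafiles must pass through RMAN CONVERT when moving between
# the two platforms.
value = 0x0A0B0C0D
big_endian = struct.pack(">I", value)     # most significant byte first
little_endian = struct.pack("<I", value)  # least significant byte first

print(big_endian.hex())     # 0a0b0c0d
print(little_endian.hex())  # 0d0c0b0a
```

The two encodings are exact byte reversals of each other, which is all "endian conversion" means at this level.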

So what should we use: exp/imp, or expdp/impdp?


Pre-steps:


From documentation we have:

Beginning with Oracle Database 10g Release 2, you can transport tablespaces
that contain XMLTypes, but you must use the IMP and EXP utilities,
not Data Pump. When using EXP, ensure that the CONSTRAINTS and TRIGGERS
parameters are set to Y (the default).

The following query returns a list of tablespaces that contain XMLTypes:

select distinct p.tablespace_name
from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
where t.table_name=x.table_name and
t.tablespace_name=p.tablespace_name and
x.owner=u.username;

Output is as:

SQL> select distinct p.tablespace_name
from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
where t.table_name=x.table_name and
t.tablespace_name=p.tablespace_name and
x.owner=u.username; 2 3 4 5

TABLESPACE_NAME
--------------------
USERS
SYSAUX

SQL>


We are going to migrate tablespace "MIGRATION", which is not in the list, so we can use Data Pump here.

Also, you cannot transport the SYSTEM tablespace or objects owned by the user SYS.


Remember:
Opaque types (such as RAW, BFILE, and the AnyTypes) can be transported, but
they are not converted as part of the cross-platform transport operation.
Their actual structure is known only to the application, so the application
must address any endianness issues after these types are moved to the new
platform.


Also:
The floating-point types BINARY_FLOAT and BINARY_DOUBLE are transportable
using Data Pump but not the original export utility, EXP.


So, we are good to use expdp/impdp.

==================================================================================================================

So, let's start with the process:

---------Check that the tablespace will be self contained:

conn / as sysdba
execute sys.dbms_tts.transport_set_check('MIGRATION', true);
--- Check for violations : select * from sys.transport_set_violations;


Output is as:

SQL> conn / as sysdba
Connected.
SQL> execute sys.dbms_tts.transport_set_check('MIGRATION', true);

PL/SQL procedure successfully completed.

SQL> select * from sys.transport_set_violations;

no rows selected

SQL>


If any rows are returned, those violations must be resolved before the tablespaces
can be transported.

We are good to go.


----------The tablespaces need to be in READ ONLY mode in order to successfully run a transport tablespace export.

ALTER TABLESPACE MIGRATION READ ONLY;



Output is as:

SQL> ALTER TABLESPACE MIGRATION READ ONLY;

Tablespace altered.

SQL>




Now we will create the directory object to export the metadata of the tablespace as:

CREATE OR REPLACE DIRECTORY dpump_dir AS '/ora/fs0000/work/dpump_dir' ;
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

expdp system/system DUMPFILE=exp_tablespace_migration.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = MIGRATION

If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter:

The syntax would be the following, but we are not doing that as we already checked that everything is self contained:
expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y




Output is as:


oracle : goldengate : @DFSRDV : /ora/fs0000/work/dpump_dir
$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Dec 3 20:08:12 2010

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> CREATE OR REPLACE DIRECTORY dpump_dir AS '/ora/fs0000/work/dpump_dir' ;

Directory created.

SQL> GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

oracle : goldengate : @DFSRDV : /ora/fs0000/work/dpump_dir
$ expdp system/system DUMPFILE=exp_tablespace_migration.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = MIGRATION

Export: Release 10.2.0.4.0 - 64bit Production on Friday, 03 December, 2010 20:09:01

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** DUMPFILE=exp_tablespace_migration.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = MIGRATION
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/ora/fs0000/work/dpump_dir/exp_tablespace_migration.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 20:10:31


oracle : goldengate : @DFSRDV : /ora/fs0000/work/dpump_dir
$ ls -lart
total 243
drwxr-xr-x 12 oracle dba 14 Dec 3 20:06 ..
drwxr-xr-x 2 oracle oinstall 4 Dec 3 20:09 .
-rw-r----- 1 oracle oinstall 118784 Dec 3 20:10 exp_tablespace_migration.dmp
-rw-r----- 1 oracle oinstall 1033 Dec 3 20:10 export.log

oracle : goldengate : @DFSRDV : /ora/fs0000/work/dpump_dir
$ cat export.log
;;;
Export: Release 10.2.0.4.0 - 64bit Production on Friday, 03 December, 2010 20:09:01

Copyright (c) 2003, 2007, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** DUMPFILE=exp_tablespace_migration.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = MIGRATION
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/ora/fs0000/work/dpump_dir/exp_tablespace_migration.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 20:10:31

oracle : goldengate : @DFSRDV : /ora/fs0000/work/dpump_dir
$





Execute the following query to see which platforms the source and target are on:

On source we have:

set line 300
SELECT tp.platform_id,substr(d.PLATFORM_NAME,2,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;




Output is as:


SQL> SELECT tp.platform_id,substr(d.PLATFORM_NAME,2,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

2 3
PLATFORM_ID SUBSTR(D.PLATFORM_NAME,2,30) ENDIAN_FORMAT
----------- ------------------------------------------------------------------------------------------------------------------------ --------------
2 olaris[tm] OE (64-bit) Big

SQL> SQL>




On target we have:

SQL> set line 300
SELECT tp.platform_id,substr(d.PLATFORM_NAME,2,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
SQL> 2 3
PLATFORM_ID SUBSTR(D.PLATFORM_NAME,2,30) ENDIAN_FORMAT
----------- ------------------------------------------------------------------------------------------------------------------------ --------------
13 inux x86 64-bit Little

SQL>


So we are going from big to little.




================================================================================================================

Now we will transfer files using DBMS_FILE_TRANSFER

DBMS_FILE_TRANSFER.PUT_FILE(
  source_directory_object      IN VARCHAR2,
  source_file_name             IN VARCHAR2,
  destination_directory_object IN VARCHAR2,
  destination_file_name        IN VARCHAR2,
  destination_database         IN VARCHAR2);


1) Create a directory object in the target database and grant permissions to the
local user.
This is the directory object into which the file is placed at the
destination site; it must exist in the remote file system.

CREATE OR REPLACE DIRECTORY target_dir AS '+DEST_ORADATA1_DG' ;
GRANT WRITE ON DIRECTORY target_dir TO system;





Output is as:

oracle : rac1 : @DFTGDV1 : /home/oracle
$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Dec 3 20:29:55 2010

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> CREATE OR REPLACE DIRECTORY target_dir AS '+DEST_ORADATA1_DG' ;

Directory created.

SQL> GRANT WRITE ON DIRECTORY target_dir TO system;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

oracle : rac1 : @DFTGDV1 : /home/oracle
$



Now,

2) Create a directory object in the source database. This is the directory object from which
the file is copied at the local source site; it must exist at the source site.

CREATE OR REPLACE DIRECTORY source_dir AS '+ORADATA1_DG/dfsrdv' ;
GRANT READ,WRITE ON DIRECTORY source_dir TO system;
CREATE OR REPLACE DIRECTORY source_dir_1 AS '+ORADATA1_DG/dfsrdv/datafile' ;
GRANT READ,WRITE ON DIRECTORY source_dir_1 TO system;




Output is as:

oracle : goldengate : @DFSRDV : /ora/fs0000/work/dpump_dir
$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Dec 3 20:34:12 2010

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> CREATE OR REPLACE DIRECTORY source_dir AS '+ORADATA1_DG/dfsrdv' ;

Directory created.

SQL> GRANT READ,WRITE ON DIRECTORY source_dir TO system;

Grant succeeded.

SQL> CREATE OR REPLACE DIRECTORY source_dir_1 AS '+ORADATA1_DG/dfsrdv/datafile' ;

Directory created.

SQL> GRANT READ,WRITE ON DIRECTORY source_dir TO system;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

oracle : goldengate : @DFSRDV : /ora/fs0000/work/dpump_dir
$





Now,

3) Create a dblink to connect to the target database:


CREATE DATABASE LINK DFTGDV CONNECT TO system IDENTIFIED BY system USING 'DFTGDV';

Output is as:

$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Dec 3 20:36:38 2010

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> conn system/system
Connected.
SQL> CREATE DATABASE LINK DFTGDV CONNECT TO system IDENTIFIED BY system USING 'DFTGDV';

Database link created.

SQL>

Test the dblink as:
select * from user_users@DFTGDV;


Here is the output:

SQL> select * from user_users@DFTGDV
2 ;

USERNAME USER_ID ACCOUNT_STATUS
------------------------------ ---------- --------------------------------
LOCK_DATE EXPIRY_DATE DEFAULT_TABLESPACE
------------------ ------------------ ------------------------------
TEMPORARY_TABLESPACE CREATED INITIAL_RSRC_CONSUMER_GROUP
------------------------------ ------------------ ------------------------------
EXTERNAL_NAME
--------------------------------------------------------------------------------
SYSTEM 5 OPEN
SYSTEM
TEMP 12-MAR-08 SYS_GROUP



SQL>





Now,

4) Connect to source instance:

DFSRDV => connect string to the source database
DFTGDV => dblink to the target database
source file => migration.309.736802539
target file => migrated_from_source.dbf

FILE_NAME TABLESPACE_NAME
------------------------------------------------------------------------------------- --------------------
+ORADATA1_DG/dfsrdv/datafile/migration.309.736802539 MIGRATION
+ORADATA1_DG/dfsrdv/datafile/migration.310.736802721 MIGRATION
+ORADATA1_DG/dfsrdv/datafile/migration.311.736802729 MIGRATION



CONNECT system/system@DFSRDV

BEGIN
DBMS_FILE_TRANSFER.PUT_FILE ( 'source_dir_1' , 'migration.309.736802539' , 'target_dir' , 'migrated_from_source.dbf' , 'DFTGDV' ) ;
END ;
/


Due to unpublished bug 8412695, I am getting error ORA-19563 (at the time of writing this note Oracle was still working
on this bug and there was no solution to it), so I have to convert the files at the source.


SQL> BEGIN
DBMS_FILE_TRANSFER.PUT_FILE ( 'source_dir_1' , 'migration.309.736802539' , 'target_dir' , 'migrated_from_source.dbf' , 'DFTGDV' ) ;
END ; 2 3
4 /
BEGIN
*
ERROR at line 1:
ORA-19563: File transfer: data file conversion header validation failed for
file +DEST_ORADATA1_DG/migrated_from_source.dbf
ORA-02063: preceding line from DFTGDV
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 60
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 168
ORA-06512: at line 2


SQL>


So, I will convert the files at the source as:


RMAN> convert tablespace migration to platform="Linux x86 64-bit" FORMAT '+DFDV_ORADATA1_DG';



Output is as:

oracle : goldengate : @DFSRDV : /home/oracle
$ rman

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Dec 3 21:06:23 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.

RMAN> connect target /

connected to target database: DFSRDV (DBID=1616533891)

RMAN> convert tablespace migration to platform="Linux x86 64-bit" FORMAT '+DFDV_ORADATA1_DG';

Starting backup at 03-DEC-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=33 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00006 name=+ORADATA1_DG/dfsrdv/datafile/migration.309.736802539
converted datafile=+ORADATA1_DG/dfsrdv/xtransport/migration.312.736808821
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00007 name=+ORADATA1_DG/dfsrdv/datafile/migration.310.736802721
converted datafile=+ORADATA1_DG/dfsrdv/xtransport/migration.313.736808829
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00008 name=+ORADATA1_DG/dfsrdv/datafile/migration.311.736802729
converted datafile=+ORADATA1_DG/dfsrdv/xtransport/migration.314.736808831
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03
Finished backup at 03-DEC-10

RMAN>



and files are here in ASM as:

ASMCMD> pwd
+ORADATA1_DG/DFSRDV/XTRANSPORT
ASMCMD> ls -lart
WARNING:option 'r' is deprecated for 'ls'
please use 'reverse'

Type Redund Striped Time Sys Name
XTRANSPORT UNPROT COARSE DEC 03 21:00:00 Y none => MIGRATION.312.736808821
XTRANSPORT UNPROT COARSE DEC 03 21:00:00 Y none => MIGRATION.313.736808829
XTRANSPORT UNPROT COARSE DEC 03 21:00:00 Y none => MIGRATION.314.736808831
ASMCMD>


Now, I will create a directory object pointing to this directory and then move the files to the target:

CREATE OR REPLACE DIRECTORY source_dir_2 AS '+ORADATA1_DG/DFSRDV/XTRANSPORT' ;
GRANT READ,WRITE ON DIRECTORY source_dir_2 TO system;


Output is as:

oracle : goldengate : @DFSRDV : /home/oracle
$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Dec 3 21:10:28 2010

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> CREATE OR REPLACE DIRECTORY source_dir_2 AS '+ORADATA1_DG/DFSRDV/XTRANSPORT' ;
GRANT READ,WRITE ON DIRECTORY source_dir_2 TO system;

Directory created.

SQL>
Grant succeeded.

SQL>


Now let's transfer the files:



The file names to transfer are:

MIGRATION.312.736808821
MIGRATION.313.736808829
MIGRATION.314.736808831


On the source system, do the following:

CONNECT system/system@DFSRDV

BEGIN
DBMS_FILE_TRANSFER.PUT_FILE ( 'source_dir_2' , 'MIGRATION.312.736808821' , 'target_dir' , 'migrated_from_source.dbf' , 'DFTGDV' ) ;
END ;
/


BEGIN
DBMS_FILE_TRANSFER.PUT_FILE ( 'source_dir_2' , 'MIGRATION.313.736808829' , 'target_dir' , 'migrated_from_source_1.dbf' , 'DFTGDV' ) ;
END ;
/


BEGIN
DBMS_FILE_TRANSFER.PUT_FILE ( 'source_dir_2' , 'MIGRATION.314.736808831' , 'target_dir' , 'migrated_from_source_2.dbf' , 'DFTGDV' ) ;
END ;
/



Output is as:

SQL> CONNECT system/system@DFSRDV
Connected.
SQL> BEGIN
DBMS_FILE_TRANSFER.PUT_FILE ( 'source_dir_2' , 'MIGRATION.312.736808821' , 'target_dir' , 'migrated_from_source.dbf' , 'DFTGDV' ) ;
END ; 2 3
4 /

PL/SQL procedure successfully completed.

SQL> BEGIN
DBMS_FILE_TRANSFER.PUT_FILE ( 'source_dir_2' , 'MIGRATION.313.736808829' , 'target_dir' , 'migrated_from_source_1.dbf' , 'DFTGDV' ) ;
END ; 2 3
4 /

PL/SQL procedure successfully completed.

SQL> BEGIN
DBMS_FILE_TRANSFER.PUT_FILE ( 'source_dir_2' , 'MIGRATION.314.736808831' , 'target_dir' , 'migrated_from_source_2.dbf' , 'DFTGDV' ) ;
END ;
/
2 3 4

PL/SQL procedure successfully completed.

SQL> SQL>
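The three PUT_FILE calls differ only in the file names, so with many datafiles the blocks can be generated rather than typed. A minimal shell sketch (the `gen_puts` helper is my own, not part of the original session; the file and directory object names are the ones used in this example):

```shell
# Emit one PUT_FILE anonymous block per converted datafile.
gen_puts() {
  i=0
  for f in MIGRATION.312.736808821 MIGRATION.313.736808829 MIGRATION.314.736808831; do
    # First target file has no suffix, later ones get _1, _2, ...
    if [ "$i" -gt 0 ]; then suffix="_$i"; else suffix=""; fi
    printf "BEGIN\nDBMS_FILE_TRANSFER.PUT_FILE ( 'source_dir_2' , '%s' , 'target_dir' , 'migrated_from_source%s.dbf' , 'DFTGDV' ) ;\nEND ;\n/\n" "$f" "$suffix"
    i=$((i+1))
  done
}
gen_puts
```

The generated blocks can then be pasted into, or spooled through, the same SQL*Plus session connected as system@DFSRDV.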






Now, let's find our transferred datafiles on the target:


Here they are:

ASMCMD> cd +DEST_ORADATA1_DG/DFTGDV/XTRANSPORT
ASMCMD> ls -lrt
WARNING:option 'r' is deprecated for 'ls'
please use 'reverse'

Type Redund Striped Time Sys Name
XTRANSPORT UNPROT COARSE DEC 03 21:00:00 Y FILE_TRANSFER.1552.736809213
XTRANSPORT UNPROT COARSE DEC 03 21:00:00 Y FILE_TRANSFER.1553.736809313
XTRANSPORT UNPROT COARSE DEC 03 21:00:00 Y FILE_TRANSFER.1554.736809333
ASMCMD>



Now, let's plug these files in.


First, create a new directory object in the target to hold the expdp dumpfile that we extracted from the source:

CREATE OR REPLACE DIRECTORY dpump_dir AS '/ora/fs0000/work/oraexport/DFTGDV' ;
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

Output is as:

connect as sysdba and do the following:

SQL> CREATE OR REPLACE DIRECTORY dpump_dir AS '/ora/fs0000/work/oraexport/DFTGDV' ;

Directory created.

SQL> GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

Grant succeeded.

SQL>



Now, scp the file exp_tablespace_migration.dmp from source to target as:

oracle : goldengate : @DFSRDV : /ora/fs0000/work/dpump_dir
$ scp exp_tablespace_migration.dmp rac1:/ora/fs0000/work/oraexport/DFTGDV/.
oracle@rac1's password:
exp_tablespace_migra 100% ********************************************************************************************************** 116 KB 00:00

oracle : goldengate : @DFSRDV : /ora/fs0000/work/dpump_dir
$


So now on the target we have that dump:

oracle : rac1 : @DFTGDV1 : /ora/fs0000/work/oraexport/DFTGDV
$ ls -lart
total 128
drwxrwxr-x 51 4001 dba 4096 Dec 3 21:47 ..
drwxr-xr-x 2 oracle oinstall 4096 Dec 3 21:53 .
-rw-r----- 1 oracle oinstall 118784 Dec 3 21:53 exp_tablespace_migration.dmp

oracle : rac1 : @DFTGDV1 : /ora/fs0000/work/oraexport/DFTGDV
$
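Before importing, it is worth confirming the dump arrived intact; comparing checksums on the two hosts is enough. A minimal sketch (it simulates the copy locally so it is self-contained; in practice you would run `cksum exp_tablespace_migration.dmp` on each host and compare the two values):

```shell
# Simulate the scp with a local copy, then compare checksums.
tmpdir=$(mktemp -d)
printf 'dump contents\n' > "$tmpdir/exp_tablespace_migration.dmp"
cp "$tmpdir/exp_tablespace_migration.dmp" "$tmpdir/copied.dmp"

# cksum prints "<crc> <byte count>"; identical output means the copy is intact.
src_sum=$(cksum < "$tmpdir/exp_tablespace_migration.dmp")
tgt_sum=$(cksum < "$tmpdir/copied.dmp")

if [ "$src_sum" = "$tgt_sum" ]; then
  echo "checksums match"
fi
rm -rf "$tmpdir"
```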



Now, we need to create the user "MIGRATE" in the target database to plug in the datafiles. You can also use the remap schema option instead.

But I will create the user and will not use the remap schema option.

create user migrate identified by migrate default tablespace users temporary tablespace temp;
grant connect,resource,dba to migrate;

I am giving this user the same grants as it has on the source.


Here is the output:


SQL> create user migrate identified by migrate default tablespace users temporary tablespace temp;

User created.

SQL> grant connect,resource,dba to migrate;

Grant succeeded.

SQL>


Now let's import:

impdp system/system DUMPFILE=exp_tablespace_migration.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='+DEST_ORADATA1_DG/DFTGDV/XTRANSPORT/FILE_TRANSFER.1552.736809213','+DEST_ORADATA1_DG/DFTGDV/XTRANSPORT/FILE_TRANSFER.1553.736809313','+DEST_ORADATA1_DG/DFTGDV/XTRANSPORT/FILE_TRANSFER.1554.736809333'


After the import completes, bring the tablespace back into READ WRITE mode on the target:

SQL> ALTER TABLESPACE MIGRATION READ WRITE;

Tablespace altered.

SQL>



Now let's test whether the data was transferred:

SQL> conn migrate/migrate
Connected.
SQL> select table_name from user_tables;

TABLE_NAME
------------------------------
TRANSFER
PLACES

SQL> select count(*) from TRANSFER;

COUNT(*)
----------
1589

SQL> select count(*) from PLACES;

COUNT(*)
----------
50602

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

oracle : rac1 : @DFTGDV1 : /ora/fs0000/work/oraexport/DFTGDV
$


and it's a success !!!!!!!

Now, if you want, you can change the default tablespace of user migrate from users to migration:


SQL> conn / as sysdba
Connected.
SQL> alter user migrate default tablespace migration;

User altered.

SQL>


That's it !!!!!!!!!!!


You have thousands of eyes, and yet You have no eyes. You have thousands of forms, and yet You do not have even one. You have thousands of Lotus Feet, and yet You do not have even one foot. You have no nose, but you have thousands of noses. This Play of Yours entrances me. ||2|| Amongst all is the Light; You are that Light. By this
Illumination, that Light is radiant within all.

Monday, November 29, 2010

Post 23 of series,Configure more complex goldengate replication with Tutorial

You are the Primal Being, the Most Wonderful Creator. There is no other as Great as You. Age after age, You are the One.
Forever and ever, You are the One. You never change, O Creator Lord. Everything happens according to Your Will. You
Yourself accomplish all that occurs. You Yourself created the entire universe, and having fashioned it, You Yourself shall
destroy it all.




Index of all the posts of Gurpartap Singh's Blog


Let's create more data for a slightly more complex replication.

On the source, in schema meditate, create the following table.

create table place_at_time_of_death (
place_id number,
supercluster varchar2(30),
galaxy varchar2(30)
);



ALTER TABLE place_at_time_of_death
add CONSTRAINT place_id_pk PRIMARY KEY (place_id);



create sequence place_id_seq
increment by 1
start with 1
nomaxvalue
cache 20
;


alter table place_at_time_of_death add HUMAN_ID NUMBER;
alter table place_at_time_of_death add FIRSTNAME VARCHAR2(30);
alter table place_at_time_of_death add LASTNAME VARCHAR2(20);



Create this table on the target in schema CONSCIOUSNESS:

create table location_at_time_of_death (
record_seq_no number,
place_id number,
supercluster varchar2(30),
galaxy varchar2(30)
);


ALTER TABLE location_at_time_of_death
add CONSTRAINT place_id_pk PRIMARY KEY (place_id);


create sequence location_at_time_of_death_seq
INCREMENT BY 1
START WITH 1
NOMAXVALUE
CACHE 20
;



alter table location_at_time_of_death add PERSON_ID NUMBER;
alter table location_at_time_of_death add INITIALNAME VARCHAR2(30);
alter table location_at_time_of_death add ENDNAME VARCHAR2(20);
alter table location_at_time_of_death add life_resulted_in VARCHAR2(100);

Output is as:


Source:

SQL> show user
USER is "MEDITATE"
SQL> create table place_at_time_of_death (
place_id number,
supercluster varchar2(30),
galaxy varchar2(30)
);
2 3 4 5
Table created.

SQL>


SQL> ALTER TABLE place_at_time_of_death
add CONSTRAINT place_id_pk PRIMARY KEY (place_id); 2

Table altered.


SQL> create sequence place_id_seq
increment by 1
start with 1
nomaxvalue
cache 20
; 2 3 4 5 6

Sequence created.



SQL> show user
USER is "MEDITATE"
SQL> alter table place_at_time_of_death add HUMAN_ID NUMBER;
alter table place_at_time_of_death add FIRSTNAME VARCHAR2(30);
alter table place_at_time_of_death add LASTNAME VARCHAR2(20);


Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>

SQL>


Target:



SQL> create table location_at_time_of_death (
record_seq_no number,
place_id number,
supercluster varchar2(30),
galaxy varchar2(30)
);

2 3 4 5 6
Table created.

SQL> SQL>
ALTER TABLE location_at_time_of_death
add CONSTRAINT place_id_pk PRIMARY KEY (place_id);

SQL> 2
Table altered.

SQL>

SQL> SQL> create sequence location_at_time_of_death_seq
INCREMENT BY 1
START WITH 1
NOMAXVALUE
CACHE 20; 2 3 4 5

Sequence created.

SQL>


SQL> alter table location_at_time_of_death add PERSON_ID NUMBER;
alter table location_at_time_of_death add INITIALNAME VARCHAR2(30);
alter table location_at_time_of_death add ENDNAME VARCHAR2(20);

Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>
SQL> alter table location_at_time_of_death add life_resulted_in VARCHAR2(100);

Table altered.

SQL>
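The target table deliberately differs from the source: record_seq_no and life_resulted_in exist only on the target, and the name columns are renamed. When the replicat for this pair is built, the renamed columns will presumably need a COLMAP clause; a hedged sketch of what that MAP entry could look like (this is my illustration, not the parameter file from this post):

```
MAP meditate.place_at_time_of_death, TARGET consciousness.location_at_time_of_death,
COLMAP (USEDEFAULTS,
  PERSON_ID = HUMAN_ID,
  INITIALNAME = FIRSTNAME,
  ENDNAME = LASTNAME);
```

The target-only columns would still need to be populated on the target side, for example by a trigger drawing record_seq_no from location_at_time_of_death_seq.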





Now, configure GoldenGate with DDL replication.

On the source, do the following:

Enable supplemental logging at the table level:
DBLOGIN USERID meditate, PASSWORD meditate
ADD TRANDATA meditate.place_at_time_of_death



Output as:

GGSCI (rac1.rac.meditate.com) 15> DBLOGIN USERID meditate, PASSWORD meditate
Successfully logged into database.

GGSCI (rac1.rac.meditate.com) 16> ADD TRANDATA meditate.place_at_time_of_death

Logging of supplemental redo data enabled for table MEDITATE.PLACE_AT_TIME_OF_DEATH.

GGSCI (rac1.rac.meditate.com) 17>



In LOGS:

2010-11-18 21:48:31 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): ADD TRANDATA meditate.place_at_time_of_death.





Add extract as:
eplace
ADD EXTRACT eplace, TRANLOG, BEGIN NOW, threads 3

ADD EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl, EXTRACT eplace


EDIT PARAMS eplace


EXTRACT eplace
USERID goldengate@simarsv1, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
TRANLOGOPTIONS ASMUSER sys@RAC1ASM, ASMPASSWORD AACAAAAAAAAAAAJASAOBIHRGNFGAYCKBHJPAGIXFGGBJUAXI, ENCRYPTKEY default
EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl
-- DDL INCLUDE MAPPED
TABLE meditate.place_at_time_of_death;

Please note I am commenting out the line DDL INCLUDE MAPPED, as it doesn't work when we use the SOURCEDEFS parameter in the replicat,
i.e. when the definition differs between source and target.


pplace
ADD EXTRACT pplace, EXTTRAILSOURCE /u01/app/oracle/product/goldengate/dirdat/pl

ADD RMTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl, EXTRACT pplace

EDIT PARAMS pplace

EXTRACT pplace
RMTHOST goldengate, MGRPORT 7909
RMTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl
PASSTHRU
TABLE meditate.place_at_time_of_death;




Output is as:

GGSCI (rac1.rac.meditate.com) 15> DBLOGIN USERID meditate, PASSWORD meditate
Successfully logged into database.

GGSCI (rac1.rac.meditate.com) 16> ADD TRANDATA meditate.place_at_time_of_death

Logging of supplemental redo data enabled for table MEDITATE.PLACE_AT_TIME_OF_DEATH.

GGSCI (rac1.rac.meditate.com) 17> ADD EXTRACT eplace, TRANLOG, BEGIN NOW, threads 3
EXTRACT added.


GGSCI (rac1.rac.meditate.com) 18> ADD EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl, EXTRACT eplace
EXTTRAIL added.


GGSCI (rac1.rac.meditate.com) 19> EDIT PARAMS eplace

EXTRACT eplace
USERID goldengate@simarsv1, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
TRANLOGOPTIONS ASMUSER sys@RAC1ASM, ASMPASSWORD AACAAAAAAAAAAAJASAOBIHRGNFGAYCKBHJPAGIXFGGBJUAXI, ENCRYPTKEY default
EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl
--DDL INCLUDE MAPPED
TABLE meditate.place_at_time_of_death;


GGSCI (rac1.rac.meditate.com) 20> ADD EXTRACT pplace, EXTTRAILSOURCE /u01/app/oracle/product/goldengate/dirdat/pl
EXTRACT added.


GGSCI (rac1.rac.meditate.com) 21> ADD RMTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl, EXTRACT pplace
RMTTRAIL added.


GGSCI (rac1.rac.meditate.com) 22> EDIT PARAMS pplace

EXTRACT pplace
RMTHOST goldengate, MGRPORT 7909
RMTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl
PASSTHRU
TABLE meditate.place_at_time_of_death;




GGSCI (rac1.rac.meditate.com) 23>
oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$






In logs it looks like:

2010-11-18 21:48:31 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): ADD TRANDATA meditate.place_at_time_of_death.
2010-11-18 21:50:02 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): ADD EXTRACT eplace TRANLOG, BEGIN NOW, threads 3.
2010-11-18 21:50:13 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): ADD EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl EXTRACT eplace.
2010-11-18 21:50:26 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): EDIT PARAMS eplace.
2010-11-18 21:51:16 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): ADD EXTRACT pplace EXTTRAILSOURCE /u01/app/oracle/product/goldengate/dirdat/pl.
2010-11-18 21:51:26 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): ADD RMTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl EXTRACT pplace.
2010-11-18 21:51:38 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): EDIT PARAMS pplace.

============================================================

Create definition file here and scp to the target side.

I will create a file defeplace.prm under directory $GOLDEN_GATE_INSTALL/dirprm on the source as:

cd /u01/app/oracle/product/goldengate/dirprm
vi defeplace.prm

DEFSFILE /u01/app/oracle/product/goldengate/dirdat/defeplace
USERID goldengate, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
TABLE meditate.place_at_time_of_death;

and then execute:

cd /u01/app/oracle/product/goldengate
./defgen paramfile dirprm/defeplace.prm


then scp this file to the target as:
scp /u01/app/oracle/product/goldengate/dirdat/defeplace goldengate:/u01/app/oracle/product/goldengate/dirdat/defeplace

--------------------------------------------------------------

Output is as:

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate/dirdat
$ cd /u01/app/oracle/product/goldengate

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$ ./defgen paramfile dirprm/defeplace.prm

***********************************************************************
Oracle GoldenGate Table Definition Generator for Oracle
Version 10.4.0.19 Build 002
Linux, x64, 64bit (optimized), Oracle 11 on Sep 18 2009 00:09:13

Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.


Starting at 2010-11-25 02:52:07
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Thu Sep 3 03:28:30 EDT 2009, Release 2.6.18-164.el5
Node: rac1.rac.meditate.com
Machine: x86_64
soft limit hard limit
Address Space Size : unlimited unlimited
Heap Size : unlimited unlimited
File Size : unlimited unlimited
CPU Time : unlimited unlimited

Process id: 21136

***********************************************************************
** Running with the following parameters **
***********************************************************************
DEFSFILE /u01/app/oracle/product/goldengate/dirdat/defeplace
USERID goldengate, PASSWORD ************************************************* ENCRYPTKEY default
TABLE meditate.place_at_time_of_death;
Retrieving definition for MEDITATE.PLACE_AT_TIME_OF_DEATH



Definitions generated for 1 tables in /u01/app/oracle/product/goldengate/dirdat/defeplace


oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$ scp /u01/app/oracle/product/goldengate/dirdat/defeplace goldengate:/u01/app/oracle/product/goldengate/dirdat/defeplace
oracle@goldengate's password:
defeplace 100% 1162 1.1KB/s 00:00

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$
===========================================================

We will be using the following GoldenGate parameters in the parameter file.


Using EXCEPTIONSONLY
Use EXCEPTIONSONLY in an exceptions MAP statement intended for error handling. It causes
the MAP statement in which it is specified to be executed only if an error occurs for the last
record processed in the preceding MAP statement.
To use EXCEPTIONSONLY, set the REPERROR parameter to EXCEPTION for the error to be handled
as an exception. Make certain that the exceptions MAP statement specifies the same SOURCE
table as in the MAP statement for which the error is anticipated. Set the TARGET table to the
name of the exceptions table. For more information about REPERROR, see the next section.
The exceptions MAP statement must follow the MAP statement for which the error is
anticipated. For more information about using an exceptions MAP statement, see the
GoldenGate for Windows and UNIX Administrator Guide.
Syntax: MAP <source table>, TARGET <exceptions table>, EXCEPTIONSONLY;
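To illustrate (a sketch only, not part of this tutorial's configuration: the exceptions table CONSCIOUSNESS.location_exceptions and its ERR_DATE column are hypothetical and would have to be created beforehand), a base MAP followed by its exceptions MAP could look like:

```
REPERROR (DEFAULT, EXCEPTION)
MAP meditate.place_at_time_of_death, TARGET CONSCIOUSNESS.location_at_time_of_death;
-- exceptions MAP: same source table, hypothetical exceptions table as target;
-- must immediately follow the base MAP statement above
MAP meditate.place_at_time_of_death, TARGET CONSCIOUSNESS.location_exceptions, &
EXCEPTIONSONLY, &
INSERTALLRECORDS, &
COLMAP (USEDEFAULTS, &
ERR_DATE = @DATENOW ());
```

INSERTALLRECORDS makes every failed operation (not just inserts) land in the exceptions table as a new row, which is usually what you want for an audit-style exceptions table.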



REPERROR
Valid for Replicat
Use the REPERROR parameter to control how Replicat responds to errors. You can use one
REPERROR statement to handle most errors in a default manner, while using one or more
other REPERROR statements to handle specific errors differently. For example, you can ignore
duplicate-record errors but abort processing in all other cases.
In the syntax shown, note that the <error>, <response> specification must be within
parentheses.

This parameter has many options but I will discuss following 2 here only:

DISCARD Log the error to the discard file but continue processing the
transaction and subsequent transactions.

EXCEPTION Handle the error as an exception. In anticipation of possible errors,
you can create an exceptions MAP statement that executes only after
an error. Use that MAP statement, for example, to map columns
from a failed update statement into a “missing update” table. In the
parameter file, specify the exceptions MAP statement after the base MAP statement.
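For illustration (a sketch, not taken from this tutorial's parameter file), the two options could be combined so that one specific error is routed to an exceptions MAP while everything else is discarded:

```
-- discard-and-continue for all errors by default
REPERROR (DEFAULT, DISCARD)
-- but handle ORA-01403 (no data found) through an exceptions MAP
REPERROR (-1403, EXCEPTION)
```

Note the parentheses around each (<error>, <response>) pair, as required by the syntax.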



DISCARDFILE
Valid for Extract and Replicat
Use the DISCARDFILE parameter to generate a discard file to which GoldenGate can log
records that it cannot process. Records can be discarded for several reasons. For example,
a record is discarded if the underlying table structure changed since the record was written
to the trail. You can use the discard file to help you identify the cause of processing errors.
Each entry in the discard file contains the discarded record buffer and an error code
indicating the reason. GoldenGate creates the specified discard file in the dirrpt subdirectory
of the GoldenGate installation directory. You can view it with a text editor or by
using the following command in GGSCI.
VIEW REPORT <file name>
Where: <file name> is the fully qualified name of the discard file.
To prevent having to perform manual maintenance of discard files, use either the PURGE or
APPEND option. Otherwise, you must specify a different discard file name before starting
each process run, because GoldenGate will not write to an existing discard file.
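A sketch of a DISCARDFILE specification that uses the dirrpt subdirectory with the APPEND option (the file name here is hypothetical; the size cap is optional):

```
-- append to the existing discard file instead of failing on restart,
-- capping it at 50 MB
DISCARDFILE ./dirrpt/rplace.dsc, APPEND, MEGABYTES 50
```

With APPEND (or PURGE) in place, the process can be restarted without renaming the discard file each time.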



DISCARDROLLOVER
Valid for Extract and Replicat
Use the DISCARDROLLOVER parameter to set a schedule for aging discard files. For long or
continuous runs, setting an aging schedule prevents the discard file from filling up and
causing the process to abend, and it provides a predictable set of archives that can be
included in your archiving routine.
When the DISCARDROLLOVER age point is reached, a new discard file is created, and old files
are renamed in the format of <group name><n>.<extension>, where:
- <group name> is the name of the Extract or Replicat group
- <n> is a number that gets incremented by one each time a new file is created, for
example: myext0.dsc, myext1.dsc, myext2.dsc, and so forth.
You can specify a time of day, a day of the week, or both. Specifying just a time of day (AT
option) without a day of the week (ON option) generates a discard file at the specified time
every day.
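For example, to combine both options and roll the discard file over once a week at a quiet hour (the time chosen here is arbitrary):

```
-- roll over the discard file at 02:00 every Sunday
DISCARDROLLOVER AT 02:00 ON SUNDAY
```

The parameter file for rplace later in this post uses only the ON form (DISCARDROLLOVER ON sunday), which rolls over at midnight on that day.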



ALLOWDUPTARGETMAP | NOALLOWDUPTARGETMAP
Valid for Replicat
Use the ALLOWDUPTARGETMAP and NOALLOWDUPTARGETMAP parameters to control whether or
not duplicate MAP statements for the same source and target objects are accepted in a
parameter file.
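A sketch of where this matters: two MAP statements for the same source/target pair (for example, one applying inserts and one applying updates with different handling) would be rejected unless ALLOWDUPTARGETMAP precedes them:

```
ALLOWDUPTARGETMAP
-- first MAP: apply inserts only
GETINSERTS
IGNOREUPDATES
MAP meditate.place_at_time_of_death, TARGET CONSCIOUSNESS.location_at_time_of_death, &
COLMAP (USEDEFAULTS);
-- second MAP for the same pair: apply updates only
IGNOREINSERTS
GETUPDATES
MAP meditate.place_at_time_of_death, TARGET CONSCIOUSNESS.location_at_time_of_death, &
COLMAP (USEDEFAULTS);
```

This pattern (splitting operation types across duplicate MAPs) is one common reason to enable ALLOWDUPTARGETMAP; the exceptions MAP technique shown earlier in this post is another.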



Add the Replicat like the following on the target server. Please note that here we are replicating just the inserts;
we will modify the parameter file to replicate deletes and updates later in the tutorial:

ADD REPLICAT rplace, EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl


EDIT PARAMS rplace

REPLICAT rplace
SOURCEDEFS /u01/app/oracle/product/goldengate/dirdat/defeplace
DISCARDFILE /u01/app/oracle/product/goldengate/dirdat/location_at_time_of_death_err, purge
DISCARDROLLOVER ON sunday
USERID goldengate, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
REPERROR (DEFAULT, DISCARD)
GETINSERTS
IGNOREDELETES
IGNOREUPDATES
MAP meditate.place_at_time_of_death, TARGET CONSCIOUSNESS.location_at_time_of_death, &
sqlexec (id record_seq_lookup, query "select CONSCIOUSNESS.location_at_time_of_death_seq.NEXTVAL record_seq from dual", noparams), &
sqlexec (id life_resulted_in_lookup, query "select deed_result from CONSCIOUSNESS.RESULTS &
where deed_id = (select DEED_ID_AT_DEATH from CONSCIOUSNESS.TENURE where &
PERSON_ID=(:humanid_topass) and INITIALNAME=:fist and ENDNAME=:lastname_topass)", &
params(humanid_topass=HUMAN_ID,fist=FIRSTNAME,lastname_topass=LASTNAME), ERROR RAISE, TRACE ALL), &
COLMAP (RECORD_SEQ_NO=@getval (record_seq_lookup.record_seq), &
PLACE_ID=PLACE_ID, &
SUPERCLUSTER=SUPERCLUSTER, &
GALAXY=GALAXY, &
PERSON_ID=HUMAN_ID, &
INITIALNAME=FIRSTNAME, &
ENDNAME=LASTNAME, &
LIFE_RESULTED_IN=@getval (life_resulted_in_lookup.deed_result)), &
KEYCOLS (PLACE_ID) ;



Explanation for a few parameters that we are using here:
In the following block:

"sqlexec (id record_seq_lookup, query "select CONSCIOUSNESS.location_at_time_of_death_seq.NEXTVAL record_seq from dual", noparams), &"

"id" declares the alias
"record_seq_lookup" is alias name of the block and can be anything.
"query" is the query that will be executed
"record_seq" is the alias for "CONSCIOUSNESS.location_at_time_of_death_seq.NEXTVAL" and can be anything
i.e whatever is returned will be stored in "record_seq" and when we want to retrive teh value in later part of teh parameter file
reference it as record_seq_lookup.record_seq
i.e block_alias_name.alias_name
"noparams" means we are not passing any parameters to this sql being executed.


And the explanation for the following block is:

" sqlexec (id life_resulted_in_lookup, query "select deed_result from CONSCIOUSNESS.RESULTS &
where deed_id = (select DEED_ID_AT_DEATH from CONSCIOUSNESS.TENURE where &
PERSON_ID=(:humanid_topass) and INITIALNAME=:fist and ENDNAME=:lastname_topass)", &
params(humanid_topass=HUMAN_ID,fist=FIRSTNAME,lastname_topass=LASTNAME), ERROR RAISE, TRACE ALL), & "

The only difference here is that we are passing parameters to the sql, and they are:
:humanid_topass
:fist
:lastname_topass

and the values passed for them are:
humanid_topass=HUMAN_ID
fist=FIRSTNAME
lastname_topass=LASTNAME






Now lets start the replicat as:

start replicat rplace


Output is as:

GGSCI (goldengate) 14> ADD REPLICAT rplace, EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl
REPLICAT added.


GGSCI (goldengate) 15> EDIT PARAMS rplace



Output in logs is as:

2010-11-25 02:23:54 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): ADD REPLICAT rplace EXTTRAIL /u01/app/oracle/product/goldengate/dirdat/pl.
2010-11-25 02:24:19 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): EDIT PARAMS rplace.
2010-11-25 02:26:00 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): start replicat rplace.
2010-11-25 02:26:00 GGS INFO 301 Oracle GoldenGate Manager for Oracle, mgr.prm: Command received from GGSCI on host 192.168.1.169 (START REPLICAT RPLACE ).
2010-11-25 02:26:00 GGS INFO 302 Oracle GoldenGate Manager for Oracle, mgr.prm: REPLICAT RPLACE starting.
2010-11-25 02:26:00 GGS INFO 320 Oracle GoldenGate Delivery for Oracle, rplace.prm: REPLICAT RPLACE starting.
2010-11-25 02:26:01 GGS INFO 320 Oracle GoldenGate Delivery for Oracle, rplace.prm: REPLICAT RPLACE started.




Now lets start the extract on the source side as:

start extract EPLACE
start extract PPLACE


Output is as:

GGSCI (rac1.rac.meditate.com) 5> start extract EPLACE

Sending START request to MANAGER ...
EXTRACT EPLACE starting


GGSCI (rac1.rac.meditate.com) 6> start extract PPLACE

Sending START request to MANAGER ...
EXTRACT PPLACE starting


GGSCI (rac1.rac.meditate.com) 7>




In source, output in logs is as:

2010-11-25 03:12:30 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): start extract EPLACE.
2010-11-25 03:12:30 GGS INFO 301 Oracle GoldenGate Manager for Oracle, mgr.prm: Command received from GGSCI on host 192.168.1.176 (START EXTRACT EPLACE ).
2010-11-25 03:12:30 GGS INFO 302 Oracle GoldenGate Manager for Oracle, mgr.prm: EXTRACT EPLACE starting.
2010-11-25 03:12:30 GGS INFO 310 Oracle GoldenGate Capture for Oracle, eplace.prm: EXTRACT EPLACE starting.
2010-11-25 03:15:01 GGS INFO Z0-016 Oracle GoldenGate Capture for Oracle, eplace.prm: Default thread stack size: 33554432.
2010-11-25 03:15:21 GGS INFO 310 Oracle GoldenGate Capture for Oracle, eplace.prm: EXTRACT EPLACE started.
2010-11-25 03:15:21 GGS INFO 112 Oracle GoldenGate Capture for Oracle, eplace.prm: No recovery is required for target file /u01/app/oracle/product/goldengate/dirdat/pl000000, at RBA 0 (file not opened).
2010-11-25 03:15:21 GGS INFO Z0-05M Oracle GoldenGate Capture for Oracle, eplace.prm: Output file /u01/app/oracle/product/goldengate/dirdat/pl is using format RELEASE 10.4.
2010-11-25 03:15:36 GGS INFO 399 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (oracle): start extract PPLACE.
2010-11-25 03:15:37 GGS INFO 301 Oracle GoldenGate Manager for Oracle, mgr.prm: Command received from GGSCI on host 192.168.1.176 (START EXTRACT PPLACE ).
2010-11-25 03:15:37 GGS INFO 302 Oracle GoldenGate Manager for Oracle, mgr.prm: EXTRACT PPLACE starting.
2010-11-25 03:15:37 GGS INFO 310 Oracle GoldenGate Capture for Oracle, pplace.prm: EXTRACT PPLACE starting.
2010-11-25 03:15:37 GGS INFO 310 Oracle GoldenGate Capture for Oracle, pplace.prm: EXTRACT PPLACE started.
2010-11-25 03:15:43 GGS INFO 406 Oracle GoldenGate Capture for Oracle, pplace.prm: Socket buffer size set to 27985 (flush size 27985).
2010-11-25 03:15:57 GGS INFO 112 Oracle GoldenGate Capture for Oracle, pplace.prm: No recovery is required for target file /u01/app/oracle/product/goldengate/dirdat/pl000000, at RBA 0 (file not opened).
2010-11-25 03:15:57 GGS INFO Z0-05M Oracle GoldenGate Capture for Oracle, pplace.prm: Output file /u01/app/oracle/product/goldengate/dirdat/pl is using format RELEASE 10.4.



On target we have following in logs:

2010-11-25 02:43:40 GGS INFO 301 Oracle GoldenGate Manager for Oracle, mgr.prm: Command received from EXTRACT on host 192.168.1.176 (START SERVER CPU -1 PRI -1 PARAMS ).
2010-11-25 02:43:40 GGS INFO 302 Oracle GoldenGate Manager for Oracle, mgr.prm: Manager started collector process (Port 7848).
2010-11-25 02:43:40 GGS INFO 373 Oracle GoldenGate Collector, port 7848: Waiting for connection (started dynamically).
2010-11-25 02:43:59 GGS INFO 373 Oracle GoldenGate Collector: Connected to 192.168.1.176:61301.
2010-11-25 02:43:59 GGS INFO 370 Oracle GoldenGate Collector, port 7848: Opening /u01/app/oracle/product/goldengate/dirdat/pl000000 (byte -1, current EOF 0).






Now we will test the replication:

On source side we have:

SQL> desc place_AT_TIME_OF_DEATH
Name Null? Type
----------------------------------------- -------- ----------------------------
PLACE_ID NOT NULL NUMBER
SUPERCLUSTER VARCHAR2(30)
GALAXY VARCHAR2(30)
HUMAN_ID NUMBER
FIRSTNAME VARCHAR2(30)
LASTNAME VARCHAR2(20)

SQL> insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'COMA',3,'AB','CD');

1 row created.

SQL> insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'COMA',3,'MN','OP');

1 row created.

SQL> insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'COMA',3,'QR','ST');

1 row created.

SQL> insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'COMA',2,'IJ','KL');

1 row created.

SQL> commit;

Commit complete.

SQL>


On target side we got:

Before insert on source:

SQL> select * from LOCATION_AT_TIME_OF_DEATH;

no rows selected



After insert on source its replicated to target as:


SQL> SQL> set line 300
SQL> l
1* select * from LOCATION_AT_TIME_OF_DEATH
SQL> /

RECORD_SEQ_NO PLACE_ID SUPERCLUSTER GALAXY PERSON_ID INITIALNAME ENDNAME LIFE_RESULTED_IN
------------- ---------- ------------------------------ ------------------------------ ---------- ------------------------------ -------------------- ----------------------------------------------------------------------------------------------------
2 7 COMA 3 AB CD Becomes snake in next incarnations
3 8 COMA 3 MN OP Becomes prostitute in next incarnations
4 9 COMA 3 QR ST Liberated
5 10 COMA 2 IJ KL Becomes pig in next incarnations

4 rows selected.

SQL>


Now, I will change the DDL of the table on the source and target databases to record more data in this table and to do a little more
complex replication.
Before changing the DDL, I will stop extract and replicat processes for this table on both source and target.

On source
stop extract eplace
stop extract pplace

Output on source is as:

GGSCI (rac1.rac.meditate.com) 42> stop extract EPLACE

Sending STOP request to EXTRACT EPLACE ...
Request processed.


GGSCI (rac1.rac.meditate.com) 43> stop extract PPLACE

Sending STOP request to EXTRACT PPLACE ...
Request processed.


GGSCI (rac1.rac.meditate.com) 44>



On target
stop replicat RPLACE

Output on target is as:

GGSCI (goldengate) 101> stop replicat RPLACE

Sending STOP request to REPLICAT RPLACE ...
Request processed.


GGSCI (goldengate) 102>



Now on the source, do the following under schema meditate:

desc place_at_time_of_death
alter table place_at_time_of_death add star varchar2(30);
alter table place_at_time_of_death add planet varchar2(30);
alter table place_at_time_of_death add race varchar2(30);
alter table place_at_time_of_death add continent varchar2(30);
alter table place_at_time_of_death add province varchar2(30);
alter table place_at_time_of_death add city varchar2(30);
desc place_at_time_of_death


Output on source is as:

SQL> desc place_at_time_of_death
Name Null? Type
----------------------------------------- -------- ----------------------------
PLACE_ID NOT NULL NUMBER
SUPERCLUSTER VARCHAR2(30)
GALAXY VARCHAR2(30)
HUMAN_ID NUMBER
FIRSTNAME VARCHAR2(30)
LASTNAME VARCHAR2(20)

SQL> alter table place_at_time_of_death add star varchar2(30);


Table altered.

SQL> SQL> alter table place_at_time_of_death add planet varchar2(30);

Table altered.

SQL> alter table place_at_time_of_death add race varchar2(30);

Table altered.

SQL> alter table place_at_time_of_death add continent varchar2(30);

Table altered.

SQL> alter table place_at_time_of_death add province varchar2(30);

Table altered.

SQL> alter table place_at_time_of_death add city varchar2(30);

Table altered.

SQL> desc place_at_time_of_death
Name Null? Type
----------------------------------------- -------- ----------------------------
PLACE_ID NOT NULL NUMBER
SUPERCLUSTER VARCHAR2(30)
GALAXY VARCHAR2(30)
HUMAN_ID NUMBER
FIRSTNAME VARCHAR2(30)
LASTNAME VARCHAR2(20)
STAR VARCHAR2(30)
PLANET VARCHAR2(30)
RACE VARCHAR2(30)
CONTINENT VARCHAR2(30)
PROVINCE VARCHAR2(30)
CITY VARCHAR2(30)

SQL>



and now do the following on target under CONSCIOUSNESS:

desc location_at_time_of_death
alter table location_at_time_of_death add star varchar2(30);
alter table location_at_time_of_death add planet varchar2(30);
alter table location_at_time_of_death add race varchar2(30);
alter table location_at_time_of_death add continent varchar2(30);
alter table location_at_time_of_death add province varchar2(30);
alter table location_at_time_of_death add city varchar2(30);
desc location_at_time_of_death


Output on target is as:
SQL> show user
USER is "CONSCIOUSNESS"
SQL> desc LOCATION_AT_TIME_OF_DEATH
Name Null? Type
----------------------------- -------- --------------------
RECORD_SEQ_NO NUMBER
PLACE_ID NOT NULL NUMBER
SUPERCLUSTER VARCHAR2(30)
GALAXY VARCHAR2(30)
PERSON_ID NUMBER
INITIALNAME VARCHAR2(30)
ENDNAME VARCHAR2(20)
LIFE_RESULTED_IN VARCHAR2(100)

SQL> alter table location_at_time_of_death add star varchar2(30);

Table altered.

SQL> alter table location_at_time_of_death add planet varchar2(30);

Table altered.

SQL> alter table location_at_time_of_death add race varchar2(30);

Table altered.

SQL> alter table location_at_time_of_death add continent varchar2(30);

Table altered.

SQL> alter table location_at_time_of_death add province varchar2(30);

Table altered.

SQL> alter table location_at_time_of_death add city varchar2(30);

Table altered.

SQL> desc location_at_time_of_death
Name Null? Type
----------------------------- -------- --------------------
RECORD_SEQ_NO NUMBER
PLACE_ID NOT NULL NUMBER
SUPERCLUSTER VARCHAR2(30)
GALAXY VARCHAR2(30)
PERSON_ID NUMBER
INITIALNAME VARCHAR2(30)
ENDNAME VARCHAR2(20)
LIFE_RESULTED_IN VARCHAR2(100)
STAR VARCHAR2(30)
PLANET VARCHAR2(30)
RACE VARCHAR2(30)
CONTINENT VARCHAR2(30)
PROVINCE VARCHAR2(30)
CITY VARCHAR2(30)

SQL>



Now we will need to create the definition file on source again and scp to target as:

cd /u01/app/oracle/product/goldengate/dirprm
vi defeplace.prm

DEFSFILE /u01/app/oracle/product/goldengate/dirdat/defeplace
USERID goldengate, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
TABLE meditate.place_at_time_of_death;

and then execute:

mv /u01/app/oracle/product/goldengate/dirdat/defeplace /u01/app/oracle/product/goldengate/dirdat/defeplace.case1
cd /u01/app/oracle/product/goldengate
./defgen paramfile dirprm/defeplace.prm


then scp this file to the target as:
scp /u01/app/oracle/product/goldengate/dirdat/defeplace goldengate:/u01/app/oracle/product/goldengate/dirdat/defeplace


Output is as:
oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$ mv /u01/app/oracle/product/goldengate/dirdat/defeplace /u01/app/oracle/product/goldengate/dirdat/defeplace.case1

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$ cd /u01/app/oracle/product/goldengate

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$ ./defgen paramfile dirprm/defeplace.prm

***********************************************************************
Oracle GoldenGate Table Definition Generator for Oracle
Version 10.4.0.19 Build 002
Linux, x64, 64bit (optimized), Oracle 11 on Sep 18 2009 00:09:13

Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.


Starting at 2010-11-26 00:34:43
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Thu Sep 3 03:28:30 EDT 2009, Release 2.6.18-164.el5
Node: rac1.rac.meditate.com
Machine: x86_64
soft limit hard limit
Address Space Size : unlimited unlimited
Heap Size : unlimited unlimited
File Size : unlimited unlimited
CPU Time : unlimited unlimited

Process id: 7598

***********************************************************************
** Running with the following parameters **
***********************************************************************
DEFSFILE /u01/app/oracle/product/goldengate/dirdat/defeplace
USERID goldengate, PASSWORD ************************************************* ENCRYPTKEY default
TABLE meditate.place_at_time_of_death;
Retrieving definition for MEDITATE.PLACE_AT_TIME_OF_DEATH



Definitions generated for 1 tables in /u01/app/oracle/product/goldengate/dirdat/defeplace


oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$ scp /u01/app/oracle/product/goldengate/dirdat/defeplace goldengate:/u01/app/oracle/product/goldengate/dirdat/defeplace
oracle@goldengate's password:
defeplace 100% 1667 1.6KB/s 00:00

oracle : rac1.rac.meditate.com : @simar1 : /u01/app/oracle/product/goldengate
$



Now lets fix our GoldenGate parameter file for replicat rplace on the target so that we can still replicate the inserts:

EDIT PARAMS rplace

REPLICAT rplace
SOURCEDEFS /u01/app/oracle/product/goldengate/dirdat/defeplace
DISCARDFILE /u01/app/oracle/product/goldengate/dirdat/location_at_time_of_death_err, purge
DISCARDROLLOVER ON sunday
USERID goldengate, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
REPERROR (DEFAULT, DISCARD)
GETINSERTS
IGNOREDELETES
IGNOREUPDATES
MAP meditate.place_at_time_of_death, TARGET CONSCIOUSNESS.location_at_time_of_death, &
sqlexec (id record_seq_lookup, query "select CONSCIOUSNESS.location_at_time_of_death_seq.NEXTVAL record_seq from dual", noparams), &
sqlexec (id life_resulted_in_lookup, query "select deed_result from CONSCIOUSNESS.RESULTS &
where deed_id = (select DEED_ID_AT_DEATH from CONSCIOUSNESS.TENURE where &
PERSON_ID=(:humanid_topass) and INITIALNAME=:fist and ENDNAME=:lastname_topass)", &
params(humanid_topass=HUMAN_ID,fist=FIRSTNAME,lastname_topass=LASTNAME), ERROR RAISE, TRACE ALL), &
COLMAP (RECORD_SEQ_NO=@getval (record_seq_lookup.record_seq), &
PLACE_ID=PLACE_ID, &
SUPERCLUSTER=SUPERCLUSTER, &
GALAXY=GALAXY, &
PERSON_ID=HUMAN_ID, &
INITIALNAME=FIRSTNAME, &
ENDNAME=LASTNAME, &
LIFE_RESULTED_IN=@getval (life_resulted_in_lookup.deed_result), &
STAR=STAR, &
PLANET=PLANET, &
RACE=RACE, &
CONTINENT=CONTINENT, &
PROVINCE=PROVINCE, &
CITY=CITY), &
KEYCOLS (PLACE_ID) ;


Now lets start the extraction and replication processes on both source and target as:

On source:
start extract eplace
start extract pplace


Output is as:

GGSCI (rac1.rac.meditate.com) 52> start extract eplace

Sending START request to MANAGER ...
EXTRACT EPLACE starting


GGSCI (rac1.rac.meditate.com) 53> start extract pplace

Sending START request to MANAGER ...
EXTRACT PPLACE starting


GGSCI (rac1.rac.meditate.com) 54>



On target
start replicat rplace



Output is as:

GGSCI (goldengate) 102> start replicat rplace

Sending START request to MANAGER ...
REPLICAT RPLACE starting


GGSCI (goldengate) 103>



Now lets do some inserts to see if our replication is still working:


On source we have:
SQL> insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'SHAPLEY',1,'AB','CD');

1 row created.

SQL> commit;

Commit complete.

SQL>





On target we have:

1* select count(*) from location_at_time_of_death
SQL> /

COUNT(*)
----------
5

SQL>


Lets insert the other records as:

On source as:

insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'CENTAURUS',1,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'SCULPTOR',1,'IJ','KL');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'HYDA',1,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'PERSEUS-PISCES',1,'QR','ST');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'LEO',2,'AB','CD');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'URSA MAJOR',2,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'PISCES-CETUS',2,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'PAVO INDUS',2,'QR','ST');

Output on source is as:
SQL> insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'CENTAURUS',1,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'SCULPTOR',1,'IJ','KL');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'HYDA',1,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'PERSEUS-PISCES',1,'QR','ST');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'LEO',2,'AB','CD');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'URSA MAJOR',2,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'PISCES-CETUS',2,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'PAVO INDUS',2,'QR','ST');


1 row created.

SQL>
1 row created.

SQL>
1 row created.

SQL>
1 row created.

SQL>
1 row created.

SQL>
1 row created.

SQL>
1 row created.

SQL>
1 row created.

SQL> SQL> commit ;

Commit complete.

SQL> select count(*) from place_at_time_of_death;

COUNT(*)
----------
13

SQL>



On target we have:
SQL> l
1* select count(*) from location_at_time_of_death
SQL> /

COUNT(*)
----------
13

SQL>



So, our replication is working.





On source do following:
select count(*) from life;
insert into life values (4,'AB','CD',SYSDATE - 23725,65,1);
insert into life values (4,'EF','GH',SYSDATE - 23725,65,1);
insert into life values (4,'IJ','KL',SYSDATE - 23725,65,2);
insert into life values (4,'MN','OP',SYSDATE - 23725,65,2);
insert into life values (4,'QR','ST',SYSDATE - 23725,65,3);
insert into life values (5,'AB','CD',SYSDATE - 23725,65,3);
insert into life values (5,'EF','GH',SYSDATE - 23725,65,4);
insert into life values (5,'IJ','KL',SYSDATE - 23725,65,4);
insert into life values (5,'MN','OP',SYSDATE - 23725,65,5);
insert into life values (5,'QR','ST',SYSDATE - 23725,65,5);
insert into life values (6,'AB','CD',SYSDATE - 23725,65,2);
insert into life values (6,'EF','GH',SYSDATE - 23725,65,2);
insert into life values (6,'IJ','KL',SYSDATE - 23725,65,3);
insert into life values (6,'MN','OP',SYSDATE - 23725,65,3);
insert into life values (6,'QR','ST',SYSDATE - 23725,65,4);
insert into life values (7,'AB','CD',SYSDATE - 23725,65,2);
insert into life values (7,'EF','GH',SYSDATE - 23725,65,2);
insert into life values (7,'IJ','KL',SYSDATE - 23725,65,2);
insert into life values (7,'MN','OP',SYSDATE - 23725,65,2);
insert into life values (7,'QR','ST',SYSDATE - 23725,65,2);
insert into life values (8,'AB','CD',SYSDATE - 23725,65,2);
insert into life values (8,'EF','GH',SYSDATE - 23725,65,5);
insert into life values (8,'IJ','KL',SYSDATE - 23725,65,5);
insert into life values (8,'MN','OP',SYSDATE - 23725,65,5);
insert into life values (8,'QR','ST',SYSDATE - 23725,65,5);
select count(*) from life;


Output is as follows. Since the whole script is pasted into SQL*Plus at once, SQL*Plus first echoes the buffered statements and then prints "1 row created." for each of the 25 inserts, which makes the raw session long and jumbled; a condensed version is below. The table held 15 rows before this batch, and the 25 new inserts bring the count to 40:

SQL> insert into life values (4,'AB','CD',SYSDATE - 23725,65,1);

1 row created.

(... each of the remaining 24 inserts likewise returns "1 row created." ...)

SQL> select count(*) from life;

COUNT(*)
----------
40

SQL> commit;

Commit complete.

SQL>

On the target:

SQL> select count(*) from tenure;

COUNT(*)
----------
40

SQL>
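Matching counts are a good first check, but they say nothing about the row contents. If a database link from the target back to the source is available (the link name source_db below is hypothetical; substitute your own), a MINUS query in each direction is a stronger verification:

```sql
-- Rows on the target that are missing (or differ) on the source:
select * from tenure
minus
select * from life@source_db;

-- And the reverse direction:
select * from life@source_db
minus
select * from tenure;
```

Both queries returning no rows means the two tables hold identical data.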


Let's do a few more inserts on the source:

insert into life values (9,'AB','CD',SYSDATE - 23725,65,2);
insert into life values (9,'EF','GH',SYSDATE - 23725,65,5);
insert into life values (9,'IJ','KL',SYSDATE - 23725,65,5);
insert into life values (9,'MN','OP',SYSDATE - 23725,65,5);
insert into life values (9,'QR','ST',SYSDATE - 23725,65,5);

Output is (SQL*Plus again echoes the pasted statements, then prints "1 row created." for each of the five inserts):

SQL> insert into life values (9,'AB','CD',SYSDATE - 23725,65,2);

1 row created.

(... the remaining four inserts likewise each return "1 row created." ...)

SQL> commit;

Commit complete.

SQL>




and on the target:

SQL> select count(*) from tenure;

COUNT(*)
----------
45

SQL>
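Besides counting rows, GGSCI itself can confirm that the replicat applied these operations without errors. The replicat name rtest below is an assumption (following the etest/ptest naming used earlier in this post); substitute the name you actually created:

```
GGSCI> info replicat rtest, detail
GGSCI> stats replicat rtest, latest
```

The STATS output shows insert/update/delete totals per mapped table since the last report, which should line up with the batches applied above.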






Now run the following inserts on the source:

insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'SEXTANS',3,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'COLUMBA',3,'IJ','KL');


insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'HOROLOGUM',4,'AB','CD');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'CAPRICORNOUS',4,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'HERCULES',4,'IJ','KL');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'BOOTES',4,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,human_id,firstname,lastname) values (place_id_seq.nextval,'CORONA-BOREALIS',4,'QR','ST');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','N4365',5,'AB','CD');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','M49',5,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','N4526',5,'IJ','KL');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','N4535',5,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','N4429',5,'QR','ST');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','M58',6,'AB','CD');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','M59',6,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','M60',6,'IJ','KL');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','N4762',6,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','M87',6,'QR','ST');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','M89',7,'AB','CD');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','N90',7,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,galaxy,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','N4654',7,'IJ','KL');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SOL',7,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','CHARA',7,'QR','ST');

insert into place_at_time_of_death (place_id,supercluster,galaxy,star,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SCORPII',7,'AB','CD');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','37 GEMINORM',7,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','MERCURY',7,'IJ','KL');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','VENUS',7,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','MARS',7,'QR','ST');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','JUPITER',8,'AB','CD');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','SATURAN',8,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','URANUS',8,'IJ','KL');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','NEPTUNE',8,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','PLUTO',8,'QR','ST');


insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,race,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','EARTH','BIRD',9,'AB','CD');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,race,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','EARTH','HUMAN',9,'EF','GH');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,race,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','EARTH','HUMAN',9,'IJ','KL');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,race,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','EARTH','HUMAN',9,'MN','OP');
insert into place_at_time_of_death (place_id,supercluster,galaxy,star,planet,race,human_id,firstname,lastname) values (place_id_seq.nextval,'VIRGO','MILKY WAY','SUN','EARTH','HUMAN',9,'QR','ST');
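Note that these inserts rely on a sequence, place_id_seq, existing on the source. GoldenGate captures the evaluated column value from the redo, so the target receives the literal place_id and does not need the sequence itself. If you are following along and have not created it, a minimal sketch is below (an assumption; the series may have defined it with different options):

```sql
-- Minimal definition of the sequence the inserts above depend on.
create sequence place_id_seq start with 1 increment by 1;
```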


Output on the source:
SQL> select count(*) from place_at_time_of_death;

COUNT(*)
----------
50

SQL>



On the target the output is:
SQL> select count(*) from location_at_time_of_death;

COUNT(*)
----------
50

SQL>
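Finally, since the point of this post is the exception table, it is worth confirming that none of these operations were trapped as exceptions along the way. The table and column names below follow a common GoldenGate exceptions-table layout and are assumptions; adjust them to match the exception table created earlier in this post:

```sql
-- Zero rows here means every captured operation applied cleanly on the target.
select rep_name, table_name, errno, optype, dberrmsg
from exceptions
order by table_name;
```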



This human body has been given to you. This is your chance to meet the Lord of the Universe. Nothing else will work. Join the Saadh Sangat, the Company of the Holy; vibrate and meditate on the Jewel of the Naam. || 1|| Make every effort to cross over this terrifying world-ocean