Add new columns and update to 600MB table online mode with zero downtime [message #674719]
Tue, 12 February 2019 08:23
srinivas.k2005
Messages: 404 Registered: August 2006
Hi Experts,
I have a table, say TEST, that is accessed by a live application at all times (UPDATE/INSERT/DELETE). It currently holds 10 million rows in PROD (avg row length 51, size 600MB). I need to achieve the following:
1. Add two new NULLABLE columns
2. Update those new columns with values from another table, TEST_XREF, where the records match
3. Zero downtime, no impact to the live system
4. No rows on the live table should be locked by the update that populates the newly added columns
I thought I could use DBMS_REDEFINITION to achieve this, but I am stuck at the step that updates the values, and I feel there should simply be another approach.
Below is my approach, which does not satisfy my requirements completely; I need some help.
Can you validate whether the approach below is correct, or is there a better way to achieve this?
Approach 1:
CREATE TABLE TEST ( ID NUMBER PRIMARY KEY);
Insert into TEST VALUES (1);
Insert into TEST VALUES (2);
Insert into TEST VALUES (3);
Insert into TEST VALUES (4);
Insert into TEST VALUES (5);
commit;
CREATE TABLE TEST_XREF ( ID NUMBER,Task VARCHAR2(10),cv VARCHAR2(10),desc1 varchar2(100));
Insert into TEST_XREF VALUES (1, 'Test','Vars','Test1');
Insert into TEST_XREF VALUES (2, 'Test','Vars','Test1');
Insert into TEST_XREF VALUES (3, 'Test','Vars','Test1');
Insert into TEST_XREF VALUES (4, 'Test','Vars','Test1');
Insert into TEST_XREF VALUES (5, 'Test','Vars','Test1');
commit;
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE ('MY_SCHEMA','TEST',DBMS_REDEFINITION.CONS_USE_PK);
CREATE TABLE TEST_REORG AS SELECT * FROM TEST WHERE 1=2 ;
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('MY_SCHEMA','TEST', 'TEST_REORG');
ALTER TABLE TEST_REORG ADD ( Task VARCHAR2(10),cv VARCHAR2(10));
-- UPDATE TEST_REORG based on TEST_XREF goes here -- but this is not allowed during the redefinition; how do I achieve it?
DECLARE
N PLS_INTEGER;
BEGIN
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('MY_SCHEMA', 'TEST','TEST_REORG',
DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, N);
END;
/
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('MY_SCHEMA', 'TEST', 'TEST_REORG');
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE ('MY_SCHEMA', 'TEST', 'TEST_REORG');
EXEC UTL_RECOMP.RECOMP_SERIAL('MY_SCHEMA') ;
DROP TABLE MY_SCHEMA.TEST_REORG;
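One idea I am considering instead of a separate UPDATE (an untested sketch: the function names here, GET_XREF_TASK and GET_XREF_CV, are my own invention for illustration): add the new columns to TEST_REORG before starting the redefinition, then let the col_mapping parameter of START_REDEF_TABLE populate them through a lookup function. Subqueries are not allowed in col_mapping, but function calls are:

-- Untested sketch: deterministic lookup function for use in col_mapping.
CREATE OR REPLACE FUNCTION GET_XREF_TASK (p_id NUMBER) RETURN VARCHAR2
DETERMINISTIC
IS
  v_task TEST_XREF.TASK%TYPE;
BEGIN
  SELECT Task INTO v_task FROM TEST_XREF WHERE ID = p_id;
  RETURN v_task;
EXCEPTION
  WHEN NO_DATA_FOUND THEN RETURN NULL;   -- new column stays NULL if no match
END;
/
-- A second, analogous function GET_XREF_CV for the CV column (not shown).

-- Interim table already has the new columns; map them via expressions.
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname        => 'MY_SCHEMA',
    orig_table   => 'TEST',
    int_table    => 'TEST_REORG',
    col_mapping  => 'ID ID, GET_XREF_TASK(ID) TASK, GET_XREF_CV(ID) CV',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);
END;
/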
Problems with approach 1:
1. If I cannot update the rows within the redefinition, I have to run the update after it, which will again lock the table and defeats the whole point of using redefinition
Approach 2:
1. Create a table replica table with 10 million records
2. Add new columns
3. Update the new columns with another table
4. Rename the table
Problems with approach 2:
1. The rename can take time and cause downtime
2. I again need to build logic to sync the delta
3. If the delta sync is delayed, my web application may face failures
Thanks,
SRK
Re: Add new columns and update to 600MB table online mode with zero downtime [message #674752 is a reply to message #674734]
Thu, 14 February 2019 07:23
srinivas.k2005
Messages: 404 Registered: August 2006
Hi John,
The extra step of updating the rows is what causes the issue. I have suggested the approach below to the solution team and am waiting for their reply. Sure, I will say rows and not records.
1. Create an exact replica backup table of TEST, without data, with all constraints exactly the same as on the TEST table
2. Add the two new columns
3. Write a direct-path insert into the backup table, then swap the tables with DBMS_REDEFINITION; both should go together
INSERT /*+ APPEND */ INTO TEST_ORG   -- direct-path insert, per step 3
(Columns..)
SELECT Columns.., XREF columns
FROM TEST C, TEST_XREF H
WHERE C.ID = H.ID;
commit;
--Sync Delta data
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCHEMA','TEST', 'TEST_ORG');
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCHEMA','TEST', 'TEST_ORG');
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE ('SCHEMA', 'TEST', 'TEST_ORG');
Disadvantages:
1. The insert into the backup table can take time, and we are forced to use a direct-path insert so that we don't lose any delta data; it can also impact the redo logs because of the 9 million rows
2. If DBMS_REDEFINITION takes time to initiate the process, we may have data loss, and we will have to create scripts to validate between the swapped and original tables before dropping the old one
3. During the start and finish of the redefinition we may experience a period of locking on the table, which may cause application issues
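For disadvantage 2, the validation scripts could be as simple as a count comparison plus a MINUS on the key column (a sketch, assuming ID is the key and that TEST_ORG still holds the pre-swap rows after FINISH_REDEF_TABLE):

-- Sketch: compare the swapped-in table against the retained copy.
SELECT COUNT(*) FROM TEST;      -- new (swapped-in) table
SELECT COUNT(*) FROM TEST_ORG;  -- table holding the original rows

-- Keys present in one table but not the other:
SELECT ID FROM TEST
MINUS
SELECT ID FROM TEST_ORG;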
Thanks,
SRK