
Monday, 25 August 2014

Recover Data from Damaged/Corrupted/Inconsistent/Offline/Truncated/Dropped Table While the Database is Offline

When it comes to catastrophic scenarios where the database has been corrupted, no backup is available, the flashback feature was not enabled, or the database cannot even be opened due to errors or an inconsistent state, there is still a chance to recover and save the data, as long as the datafiles still exist, using third-party tools that read the datafiles directly without needing to open the database.
In this post I'll discuss one of these tools, Oracle Database Unloader (ODU).

Download ODU:
==============
http://www.oracleodu.com/en/download

Full version: Restores all data, but only if you obtain the license file from the support team [Not Free].
Trial version: Needs no license but is limited: it doesn't work on ASM, and while it can recover all data under the SYSTEM datafiles, for other datafiles it will recover less than 1% of the rows.

Note: In this demonstration I've used the TRIAL version, which is why I restore only tables located in the SYSTEM tablespace, so that all their data can be recovered.

Install ODU:
===========
Upload the downloaded file to the server and then extract it:

# gunzip odu_433_linux_x86.tar.gz 
# tar xvf odu_433_linux_x86.tar 
# cd odu
# chmod u+x odu

Prepare ODU:
============
Before you start using ODU to recover data, you first have to modify the configuration files to point to the correct datafile paths.

If the datafiles are stored on ASM, add the ASM disk paths to the asmdisk.txt file:

# vi asmdisk.txt
0 /oradata/asm/disk1.dbf 
0 /oradata/asm/disk2.dbf 
0 /oradata/asm/disk3.dbf 

Note: The first column should be filled with 0
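If the ASM instance can still be started, a query like the following (just a sketch; it assumes you can connect to the ASM instance with SQL*Plus) can generate the lines for asmdisk.txt:

SQL> SELECT '0 ' || path FROM v$asm_disk ORDER BY path;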

If the datafiles are not stored on ASM, add all datafile paths to the control.txt file.
This SQL can help you generate the lines you will add to control.txt:
SQL> SELECT 0,0,0,NAME FROM V$DATAFILE ORDER BY FILE#;
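If you prefer to let SQL*Plus write the file for you, a hedged variant (assuming the database can still be mounted long enough to query V$DATAFILE):

SQL> set pagesize 0 linesize 300 feedback off trimspool on
SQL> spool control.txt
SQL> SELECT '0 0 0 ' || NAME FROM V$DATAFILE ORDER BY FILE#;
SQL> spool off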

# vi control.txt
0 0 0 /database/orcl/datafiles/system01.dbf
0 0 0 /database/orcl/datafiles/users01.dbf
0 0 0 /database/orcl/datafiles/sysaux01.dbf
0 0 0 /database/orcl/datafiles/undotbs01.dbf
0 0 0 /database/orcl/datafiles/example01.dbf

Note:  The first three columns should be filled with 0

The remaining ODU configuration files, such as config.txt, contain settings like block_size, which must match the block size of the database you are extracting data from.
In config.txt the parameter output_format determines the format in which the data will be saved after it is extracted (unloaded) from the tables: the default value is text, where the data is saved in SQL*Loader format, and the other value is dump, where the data is saved in export dump format.
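For reference, the relevant entries in config.txt look roughly like the following (illustrative values only, and the exact syntax may differ slightly between ODU versions; block_size must match the DB_BLOCK_SIZE of the source database):

# vi config.txt
block_size 8192
output_format text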

If you are using the full version, arrange for the license by saving the configuration to a file and sending it to ODU support, who will send you back the license file:
Save the controls to oductl.txt:
# ./odu
ODU> save control 
ODU> exit 

Send the file oductl.txt to the ODU support mailbox listed at http://www.oracleodu.com/en/support; they will send you back the license file named oductl.dat. Copy it into the ODU installation directory.

* Note that the Trial version doesn't need a license, but its data restoration capability is limited as mentioned before.

Using ODU:
==========

Discover the database:
>>>>>>>>>>>>>>>>>>>>
# ./odu

ODU> load config [filename] #Not mandatory; if the file name is omitted it will load config.txt
ODU> open [filename] #Not mandatory; if the file name is omitted it will load control.txt

ODU> unload dict         #Load dictionary data from SYSTEM tablespace.
ODU> list user         #List database users & roles.
ODU> desc scott.emp #Describe a table plus extra details like object id, tablespace, ...etc.

Restore data of a truncated table: [TEST SCENARIO]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
*The control.txt file must already contain the datafile locations as shown before.

SQL> CREATE TABLE SCOTT.TEST TABLESPACE USERS AS SELECT * FROM DBA_USERS;
SQL> SELECT COUNT(*) FROM SCOTT.TEST;
SQL> TRUNCATE TABLE SCOTT.TEST;
SQL> alter system checkpoint;
SQL> alter tablespace users offline; #Important (but not mandatory): take the tablespace of the truncated table offline to avoid its space being overwritten.
SQL> alter system checkpoint; #Important: write the changes to disk, or ODU may not find the table.

# ./odu

ODU> load config [filename] #Not mandatory; if the file name is omitted it will load config.txt
ODU> open [filename] #Not mandatory; if the file name is omitted it will load control.txt

ODU> unload dict         #Load dictionary data from SYSTEM tablespace.
ODU> desc scott.test #Get information about the truncated table's location (check its tablespace number: TS#=<> )
Storage(Obj#=55125 DataObj#=55126 TS#=4 File#=4 Block#=403 Cluster=0)

ODU> scan extent tablespace 4 #Scan the extents of the tablespace # of the truncated table.
ODU> unload table scott.test object truncate #Restore all rows of truncated table in a SQL LOADER file.

SQL> alter tablespace users online; #Bring the tablespace of the truncated table back online so the truncated data can be loaded back.

Use SQL*Loader to load the data back into the truncated table:
# cd <odu location>/data
# sqlldr scott/tiger control=SCOTT_TEST.ctl 
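A quick sanity check after the load: the restored row count should match the row count of the source of our test table:

SQL> SELECT COUNT(*) FROM SCOTT.TEST;
SQL> SELECT COUNT(*) FROM DBA_USERS;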

Note: In case you want to restore the full data of a table not located in the SYSTEM tablespace, you must use the fully licensed version of ODU.

Restore a Dropped Table:
>>>>>>>>>>>>>>>>>>>>>>>
Let's create and drop a table for testing:

SQL> create table scott.bb tablespace system as select * from dba_synonyms;
SQL> drop table scott.bb;
SQL> alter system checkpoint;

Before using ODU, the first task is to get the OBJECT ID of the dropped table using Oracle's LogMiner feature:
--Add redo/archive that contain the drop transaction:
SQL> EXEC sys.dbms_logmnr.add_logfile(logfilename=>'/u01/app/oracle/oradata/xty/redo03.log'); 
--Start mining the registered archives:
SQL> EXEC sys.dbms_logmnr.start_logmnr(options=>sys.dbms_logmnr.dict_from_online_catalog);    
--Get the execution time of the drop statement:
SQL> SELECT scn,timestamp,sql_redo from v$logmnr_contents where operation='DDL' and sql_redo like '%drop%' order by 2 ; 
--Use the previous execution time to get the internal delete statement of the dropped table from the dictionary, where the object id can be found:
SQL> SELECT scn,timestamp,sql_redo from v$logmnr_contents
     where timestamp=to_date('2014-05-21 10:33:33','yyyy-mm-dd hh24:mi:ss') and sql_redo like '%OBJ%' order by 1;
--End the Log Miner task:
SQL> EXEC sys.dbms_logmnr.end_logmnr; 

By now we should have the object id of the dropped table.
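The sql_redo you are looking for is the internal delete against OBJ$; it looks something like this (illustrative output only, the object id 55150 is just an example):

delete from "SYS"."OBJ$" where "OBJ#" = 55150 and ...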

In case you're just testing and don't want to go through this lengthy process before dropping the table, just get its object id from ODU:
# ./odu
ODU> unload dict
ODU> desc scott.bb #you'll find the object id in the output.

Start the table restoration job using ODU:
# ./odu
ODU> unload dict #Load dictionary data from SYSTEM tablespace.
ODU> scan extent tablespace 0 #scan extents of tablespace 0 (system)
ODU> unload object all tablespace 0 sample #This will provide you with the command you should use to restore EVERY object inside tablespace 0.
ODU> unload user scott #Unload ALL tables owned by user SCOTT (each table will have 3 files located under data directory).
ODU> unload object 55150 sample #This will provide you with the command you should use to restore a SPECIFIC object inside tablespace 0.
ODU> unload object 55150 tablespace 0 column VARCHAR2 VARCHAR2 VARCHAR2 VARCHAR2 #Just copy & execute the command from the previous output.

Note: The unload command generates three files for each unloaded table: ".txt", ".sql" and ".ctl"

From the OS side, modify the file <odu location>/data/ODU_0000055150.sql by changing the object name from ODU_0000055150 to SCOTT.BB, then execute it:
SQL> @<odu location>/data/ODU_0000055150.sql
Table created.

From the OS side, modify the file <odu location>/data/ODU_0000055150.ctl by changing the object name from ODU_0000055150 to SCOTT.BB, then start loading the table with data using SQL*Loader:
# sqlldr scott/tiger control=ODU_0000055150.ctl
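If you prefer to script the rename instead of editing the files by hand, a sed one-liner like this (just a sketch, assuming GNU sed and the default ODU file names) edits both files in place:

# cd <odu location>/data
# sed -i 's/ODU_0000055150/SCOTT.BB/g' ODU_0000055150.sql ODU_0000055150.ctl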


Export Data from an existing Table: [Database is down / Skipping Auditing/Permissions]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
# ./odu
ODU> unload dict #Load dictionary data from SYSTEM tablespace.
ODU> desc scott.bb #Get the table information (object id, tablespace number TS#, ...).
ODU> scan extent tablespace 0 #Scan the extents of the table's tablespace (here tablespace 0).
ODU> unload table scott.bb object scanned #Export the data of table SCOTT.BB into a SQL*Loader file. [object scanned] is not mandatory.

Open the .sql file and change the table owner & name if you wish, do the same for the .ctl file, then use SQL*Loader to import the data:
# sqlldr scott/tiger control=SCOTT_BB.ctl


Conclusion:
ODU can help restore data in catastrophic scenarios where there is no available backup, or the database is damaged or cannot be opened for any reason.
On the other hand, this tool proves that database security can easily be compromised by anyone who has access to the database files and basic knowledge of such tools. To avoid such a breach, the DBA should consider encrypting the data at the filesystem level using an encryption feature such as TDE, and should also consider security hardening and limiting access at the operating system level, to prevent unauthorized users from accessing the server or the file system storage.

REFERENCE:
Full documentation: http://www.oracleodu.com/soft/ODUUserGuide_en.pdf
Real success story: http://www.basecare.dk/6storage/983/7/recovering_lob-index_block_corruption.pdf

Tuesday, 5 August 2014

Recover An RMAN Image Copy Backup From Incremental Backup

In my backup strategy I usually consider an RMAN image backup for production databases that have sufficient free disk space on the backup location.

But first, what is an RMAN Image/Copy Backup:
Starting with Oracle 10g, RMAN can back up datafiles as image copies. An image copy is a duplicate of the datafile containing all allocated and free blocks. It is very similar to the hot backup method, where you put the tablespace in begin backup mode, copy the datafiles from the OS side to the backup location, then put the tablespace in end backup mode, except that RMAN takes care of this mechanism automatically.

Why you should consider an RMAN Image/Copy Backup in your backup strategy:
The biggest advantage is that you can switch a physically corrupted/lost/damaged datafile to refer to its image copy backup, then recover that datafile and have it ready for use without needing to restore the lost datafile back, which saves a huge amount of time by skipping the restore step.
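As a quick illustration, the switch-and-recover flow looks like this (a minimal sketch, assuming datafile 4 is the damaged one and an image copy of it exists):

RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';
RMAN> SWITCH DATAFILE 4 TO COPY;
RMAN> RECOVER DATAFILE 4;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';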

How to perform an RMAN Image/Copy Backup for your database:
Just connect to RMAN and execute this simple command:
RMAN> backup as copy database format '/backupdisk/rmanbkps/copy/%U' tag='DB_COPY' plus archivelog;
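To confirm that the image copies were created, you can list them afterwards:

RMAN> LIST COPY OF DATABASE;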
Note: The free space on the backup location should be equal to or greater than the size of the database.
Note: "plus archivelog" will back up the archived logs before and after the backup to guarantee a consistent restore of the backup.

How to Recover an RMAN Image/Copy Backup:
If you include RMAN image/copy backups in your backup strategy, you should consider recovering the existing image copy instead of creating a new image copy every time you perform a backup.
By recovering the image copy, you only need to apply to the already existing image backup the transactions committed on the database since the last backup, which cuts down the backup time and CPU overhead during the backup window.

  Advantages of recovering an Image backup:
  > Reduce the impact caused by a complete backup.
  > Reduce the time of backup window.
  > Reduce network bandwidth in case of backing up over the network.
  > Can be used on a NOARCHIVELOG DB (mount mode).

How to perform recovery of RMAN Image backup:
You can use the following script for the whole operation:

# rman target /
RMAN> run{ 
              ALLOCATE CHANNEL C1 DEVICE TYPE DISK FORMAT '/FRA/RMANBKP/%U';
              BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1 FOR RECOVER
              OF COPY WITH TAG 'FULL_IMAGE_UPDTD_BKP' DATABASE FORMAT
              '/FRA/RMANBKP/%d_%t_%s_%p';
              RECOVER COPY OF DATABASE WITH TAG ='FULL_IMAGE_UPDTD_BKP';
              DELETE NOPROMPT BACKUP TAG ='FULL_IMAGE_UPDTD_BKP';
              }

How it works:
> On the FIRST run of the script, it will try to create a compressed incremental level 1 backup but will fail because no base (parent) incremental level 0 backup is available, and will throw this message:
"no parent backup or copy of datafile xx found"
It will then create the first image copy backup with the tag FULL_IMAGE_UPDTD_BKP, and will fail to recover it from an incremental level 1 backup, as no incremental level 1 backup exists at that time, throwing this message:
"no copy of datafile xx found to recover"

> On the SECOND run, it will create the first compressed incremental level 1 backup, then use it to recover the image copy that already exists from the first run.
At the end, the last command deletes the incremental level 1 backup that was used for the recovery of the image backup. If you don't have a block change tracking file for the database, it's recommended not to delete the last created incremental level 1 backup.

And so on for the next runs ...

Facts:
> Each time a datafile is added to the database, a new image copy of it will be created the next time the above script runs.
> Tag must be used to identify the image backup.
> Archived logs are not involved in recovering the image copy, so changes made by NOLOGGING operations are still captured by this operation.

To speed up the incremental backup step (and therefore the recovery of the image backup), consider creating a Block Change Tracking file:
SQL> Alter database enable block change tracking using file '/backupdisk/BCT.log' reuse;

> Block Change Tracking helps speed up the process of creating an incremental backup.
> In a RAC environment, the block change tracking file should be located on shared storage accessible by all RAC nodes.
> Block Change Tracking related info can be found in the V$BLOCK_CHANGE_TRACKING view (see the query after this list).
> The Block Change Tracking file size is approximately 1/30000 of the size of the data blocks to be tracked, e.g. roughly 10 MB for a 300 GB database.
> Once you enable Block Change Tracking, a new background process called CTWR handles the writes to the BCT file.
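A quick way to check the tracking status and file:

SQL> SELECT status, filename, bytes FROM v$block_change_tracking;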

Note: The disadvantage of creating a block change tracking file is that it introduces a small performance overhead during normal database activity.

For more information on how to use the image copy backup in disaster scenarios, please check this link:
http://dba-tips.blogspot.ae/2011/11/switch-database-to-rman-copy-backup-and.html


Reference:
http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmbckba.htm#BRADV239
http://www.oracle-base.com/articles/10g/rman-enhancements-10g.php
http://www.pythian.com/documents/Pythian-oracle-block-change.pdf