Monday, 25 August 2014

Recover Data from Damaged/Corrupted/Inconsistent/Offline/Truncated/Dropped Table While the Database is Offline

In catastrophic scenarios where the database has been corrupted, no backup is available, and the Flashback feature was not enabled, or even when the database cannot be opened due to errors or an inconsistent state, there is still a chance to recover and save the data, as long as the datafiles still exist, using third-party tools that read directly from the datafiles without needing to open the database.
In this post I'll discuss one of these tools, Oracle Database Unloader (ODU).

Download ODU:
==============
http://www.oracleodu.com/en/download

Full version: Restores all data, but requires a license file from support [Not Free].
Trial version: Doesn't need a license, but it's limited: it doesn't work on ASM, and while it can recover all data from SYSTEM datafiles, for other datafiles it will recover less than 1% of the rows.

Note: In this demonstration I've used the TRIAL version, which is why I restore only tables located in the SYSTEM tablespace, so that all data can be recovered.

Install ODU:
===========
Upload the downloaded file to the server and then extract it:

# gunzip odu_433_linux_x86.tar.gz 
# tar xvf odu_433_linux_x86.tar 
# cd odu
# chmod u+x odu

Prepare ODU:
============
Before using ODU to recover data, you first have to modify the configuration files to point to the correct datafile paths.

In case the datafiles are stored on ASM, modify asmdisk.txt by adding the ASM disk paths to it:

# vi asmdisk.txt
0 /oradata/asm/disk1.dbf 
0 /oradata/asm/disk2.dbf 
0 /oradata/asm/disk3.dbf 

Note: The first column should be filled with 0

In case the datafiles are not stored on ASM, add the paths of all datafiles to the control.txt file.
This SQL can help you generate the lines you will add to control.txt:
SQL> SELECT 0,0,0,NAME FROM V$DATAFILE ORDER BY FILE#;

# vi control.txt
0 0 0 /database/orcl/datafiles/system01.dbf
0 0 0 /database/orcl/datafiles/users01.dbf
0 0 0 /database/orcl/datafiles/sysaux01.dbf
0 0 0 /database/orcl/datafiles/undotbs01.dbf
0 0 0 /database/orcl/datafiles/example01.dbf

Note:  The first three columns should be filled with 0

The remaining ODU configuration files, such as config.txt, contain settings like block_size, which must match the database you are extracting data from.
In config.txt, the parameter output_format determines how the data is saved after it is extracted (unloaded) from tables: the default value, text, saves data in SQL*Loader format; the other value, dump, saves data in export dump format.
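For illustration only, a config.txt typically contains lines along these lines. The parameter names below are a sketch based on the ODU user guide; check the config.txt shipped with your ODU version, and make sure block_size matches your database's DB_BLOCK_SIZE:

```
byte_order little
block_size 8192
output_format text
```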

In case you are using the full version, arrange for the license by saving the configuration to a file and sending it to ODU support, who will send you back the license file:
Save the controls to oductl.txt:
# ./odu
ODU> save control 
ODU> exit 

Send the oductl.txt file to the ODU support mailbox listed at http://www.oracleodu.com/en/support; they will send you back a license file named oductl.dat. Copy it into the ODU installation directory.

* Note that the trial version doesn't need a license, but its data restoration capability is limited, as mentioned before.

Using ODU:
==========

Discover the database:
>>>>>>>>>>>>>>>>>>>>
# ./odu

ODU> load config [filename] #Optional; if the file name is omitted, config.txt is loaded.
ODU> open [filename] #Optional; if the file name is omitted, control.txt is loaded.

ODU> unload dict         #Load dictionary data from SYSTEM tablespace.
ODU> list user         #List database users & roles.
ODU> desc scott.emp #Describe a table, plus extra details like object id, tablespace, etc.

Restore data of a truncated table: [TEST SCENARIO]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
*The control.txt file must be loaded with the datafile locations, as shown before.

SQL> CREATE TABLE SCOTT.TEST TABLESPACE USERS AS SELECT * FROM DBA_USERS;
SQL> SELECT COUNT(*) FROM SCOTT.TEST;
SQL> TRUNCATE TABLE SCOTT.TEST;
SQL> alter system checkpoint;
SQL> alter tablespace users offline; #Important (though not mandatory): take the truncated table's tablespace offline to avoid its space being overwritten.
SQL> alter system checkpoint; #Important: write the changes to disk, or ODU may not find the table.

# ./odu

ODU> load config [filename] #Optional; if the file name is omitted, config.txt is loaded.
ODU> open [filename] #Optional; if the file name is omitted, control.txt is loaded.

ODU> unload dict         #Load dictionary data from SYSTEM tablespace.
ODU> desc scott.test #Get the location of the truncated table (note its tablespace number: TS#=<>)
Storage(Obj#=55125 DataObj#=55126 TS#=4 File#=4 Block#=403 Cluster=0)

ODU> scan extent tablespace 4 #Scan the extents of the truncated table's tablespace.
ODU> unload table scott.test object truncate #Restore all rows of the truncated table into a SQL*Loader file.

SQL> alter tablespace users online; #Bring the tablespace of the truncated table back online to load the truncated data back.

Use SQL*Loader to load the data back into the truncated table:
# cd <odu location>/data
# sqlldr scott/tiger control=SCOTT_TEST.ctl 

Note: To restore the full data of a table not located in the SYSTEM tablespace, you must use the full, licensed version of ODU.

Restore a Dropped Table:
>>>>>>>>>>>>>>>>>>>>>>>
Let's create and drop a table for testing:

SQL> create table scott.bb tablespace system as select * from dba_synonyms;
SQL> drop table scott.bb;
SQL> alter system checkpoint;

Before using ODU, the first task is to get the OBJECT ID of the dropped table using Oracle's LogMiner feature:
--Add redo/archive that contain the drop transaction:
SQL> EXEC sys.dbms_logmnr.add_logfile(logfilename=>'/u01/app/oracle/oradata/xty/redo03.log'); 
--Start mining the registered archives:
SQL> EXEC sys.dbms_logmnr.start_logmnr(options=>sys.dbms_logmnr.dict_from_online_catalog);    
--Get the execution time of the drop statement:
SQL> SELECT scn,timestamp,sql_redo from v$logmnr_contents where operation='DDL' and sql_redo like '%drop%' order by 2 ; 
--Use the previous execution time to get the internal delete statement for the dropped table from the dictionary, where the object id can be found:
SQL> SELECT scn,timestamp,sql_redo from v$logmnr_contents
     where timestamp=to_date('2014-05-21 10:33:33','yyyy-mm-dd hh24:mi:ss') and sql_redo like '%OBJ%' order by 1;
--End the Log Miner task:
SQL> EXEC sys.dbms_logmnr.end_logmnr; 

Now we should have the object id of the dropped table.

If you're just testing and don't want to go through this lengthy process before dropping the table, just get its object id from ODU:
# ./odu
ODU> unload dict
ODU> desc scott.bb #you'll find the object id in the output.

Start the table restoration job using ODU:
# ./odu
ODU> unload dict #Load dictionary data from SYSTEM tablespace.
ODU> scan extent tablespace 0 #scan extents of tablespace 0 (system)
ODU> unload object all tablespace 0 sample #This provides the command you should use to restore EVERY object inside tablespace 0.
ODU> unload user scott #Unload ALL tables owned by user SCOTT (each table gets 3 files under the data directory).
ODU> unload object 55150 sample #This provides the command you should use to restore a SPECIFIC object inside tablespace 0.
ODU> unload object 55150 tablespace 0 column VARCHAR2 VARCHAR2 VARCHAR2 VARCHAR2 #Just copy & execute the command from the previous output.

Note: The unload command generates three files for each unloaded table: ".txt", ".sql", ".ctl"

From the OS side, edit the file <odu location>/data/ODU_0000055150.sql, changing the object name from ODU_0000055150 to SCOTT.BB, then execute it:
SQL> @<odu location>/data/ODU_0000055150.sql
Table created.

From the OS side, edit the file <odu location>/data/ODU_0000055150.ctl, change the object name from ODU_0000055150 to SCOTT.BB, and load the data into the table using SQL*Loader:
# sqlldr scott/tiger control=ODU_0000055150.ctl


Export Data from an existing Table: [Database is down / Skipping Auditing/Permissions]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
# ./odu
ODU> unload dict #Load dictionary data from SYSTEM tablespace.
ODU> desc scott.bb #Describe the table to check its tablespace number.
ODU> scan extent tablespace 0 #Scan the extents of that tablespace.
ODU> unload table scott.bb object scanned #Export the data of table SCOTT.BB into a SQL*Loader file. [object scanned] is optional.

Open the .sql file and change the table owner & name if you wish; do the same for the .ctl file, then use SQL*Loader to load the data:
# sqlldr scott/tiger control=SCOTT_BB.ctl


Conclusion:
The ODU tool can help restore data in catastrophic scenarios where there is no available backup, or the database is damaged or cannot be opened for any reason.
On the other hand, this tool proves that database security can easily be compromised by anyone who has access to the database files and basic knowledge of such tools. To avoid such a breach, the DBA should consider encrypting the data at the filesystem level using an encryption feature such as TDE. Security hardening and limiting access at the operating-system level should also be considered, to prevent unauthorized users from accessing the server or the filesystem storage.

REFERENCE:
Full documentation: http://www.oracleodu.com/soft/ODUUserGuide_en.pdf
Real success story: http://www.basecare.dk/6storage/983/7/recovering_lob-index_block_corruption.pdf

Thursday, 21 August 2014

Standard Auditing In Oracle


Auditing is divided into three types:
  > SYS Activities Auditing.
  > System and Object Privileges Auditing.
  > Fine Grained Auditing (FGA).

Note: Auditing by triggers is not within the scope of this article.

Before I start, you have to differentiate between SYSTEM and OBJECT privileges:

SYSTEM PRIVILEGES are:  

Alter(session, any XXX, user, profile, tablespace), SYSgrant, Create, drop, truncate, select any, update any, delete any 
 
OBJECT  PRIVILEGES are:

Alter (an OBJECT like: tab, view, ...), audit, grant (on OBJ), lock, rename, select, insert, update, delete, execute.
 

Now let's explain each type in detail.

########################################################################
SYS Activities Auditing:
########################################################################

> Audits all activities of users who log in to the DB with the SYSDBA privilege; audit records are stored on the filesystem.
> Auditing SYSTEM/OBJECT privileges will never audit the SYSDBA privilege.
> Activate it by setting the parameter "AUDIT_SYS_OPERATIONS" to TRUE.
> The audit file location is controlled by the AUDIT_FILE_DEST initialization parameter; the DEFAULT is $ORACLE_HOME/rdbms/audit or the adump directory.

In order to audit statements run by the SYS user, the following initialization parameters should be set:
SQL> ALTER SYSTEM SET AUDIT_SYS_OPERATIONS= TRUE SCOPE=SPFILE;
SQL> ALTER SYSTEM SET AUDIT_FILE_DEST=<destination of audit files> SCOPE=SPFILE;
SQL> ALTER SYSTEM SET AUDIT_SYSLOG_LEVEL =<OS service log level> SCOPE=SPFILE;

Special Task:
Forward Audit SYS Operations records to FILE accessed By ROOT only:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
In case you don't want people without root privilege on the server to be able to read/write the audit logs:

Set the following initialization parameter:
SQL> ALTER SYSTEM SET audit_syslog_level='local5.info' SCOPE=SPFILE;

On OS: [By Root]
--------
# vi /etc/syslog.conf

Add this parameter:
local5.info     /var/log/oracle_audit_sys.log

Restart syslog service:
# /sbin/service syslog restart

Now you can view the audit records in /var/log/oracle_audit_sys.log, which is owned by the ROOT user.
Warning: After forwarding audit records to the OS logging service:
      > Records will be in a format that is difficult to read.
      > Fine-Grained Auditing (FGA), Oracle Label Security, and Oracle Database Vault audit records will not be captured.


########################################################################
System Privileges Auditing:
########################################################################

> Audit SYSTEM PRIVILEGES like: [Create, drop, truncate, select any, update any, delete any, Alter(session,any.,user,profile,tablespace),SYSgrant]
 when used by ALL|SPECIFIC users (Except the ones connected with SYSDBA).
> Audit options can be found in the "DBA_PRIV_AUDIT_OPTS" view.
> Audit records are stored in the SYS.AUD$ table and can be viewed through the DBA_AUDIT_TRAIL view.
> Auditing must be enabled on the database by setting the parameter "AUDIT_TRAIL" to a value other than "NONE".
> Will not work for users connected with SYSDBA privilege.

Set AUDIT_TRAIL Initialization Parameter:
-----------------------------------------------------

SQL> ALTER SYSTEM SET AUDIT_TRAIL=DB SCOPE=SPFILE;

AUDIT_TRAIL=NONE    =>Disable Auditing.
AUDIT_TRAIL=DB        =>Enable Auditing (without auditing SQL STMTS/BINDS), records stored in the DB. [half DB_EXTENDED Performance Overhead 8%]
AUDIT_TRAIL=DB_EXTENDED =>Enable Auditing (with auditing SQL STMTS/BINDS), records stored in the DB. [Highest Performace Overhead 16%] 2xDB
AUDIT_TRAIL=OS        =>Enable Auditing ,records stored in the OS Audit Trail. [Lowest Performace Overhead 2%]
AUDIT_TRAIL=XML        =>Enable Auditing (without auditing SQL STMTS/BINDS), records stored in the OS (XML Format). [Overhead Lower than DB 3.5%]
AUDIT_TRAIL=XML_EXTENDED=>Enable Auditing (with auditing SQL STMTS/BINDS), records stored in the OS (XML Format). [Overhead Lower than DB 5%]

Performance stats reference quoted from:
http://www.oracle.com/technetwork/products/audit-vault/learnmore/twp-security-auditperformance-166655.pdf

Examples:
------------
SQL> AUDIT ALL;

=>Audit all statement options EXCEPT the following:
--ALTER SEQUENCE
--ALTER TABLE
--DELETE TABLE
--SELECT ANY TABLE
--COMMENT TABLE
--GRANT TABLE
--GRANT TYPE
--GRANT DIRECTORY
--GRANT PROCEDURE
--GRANT SEQUENCE

SQL> AUDIT RESOURCE BY SCOTT;
=>Audit ALTER SYSTEM, CLUSTER, DATABASE LINK, PROCEDURE, ROLLBACK SEGMENT, SEQUENCE, SYNONYM, TABLE, TABLESPACE, VIEW when used by SCOTT only.

SQL> AUDIT ALL BY SCOTT;
=>AUDIT ALL Statements when issued by SCOTT user.

SQL> NOAUDIT ALL by SCOTT;

SQL> NOAUDIT ALL;
=>Will not audit system privileges (does not affect OBJECT privilege auditing).

SQL> audit ALL PRIVILEGES;
=>Audit ALL SYSTEM PRIVILEGES <Not RECOMMENDED>.

SQL> NOAUDIT ALL PRIVILEGES;
=>Don't audit ANY SYSTEM PRIVILEGES <Not RECOMMENDED>.

The following are some auditing techniques:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Turning ON SYSTEM PRIVILEGES auditing as per 11g settings: <secconf.sql script>
------------------------------------------------------------------------------
Audit ALTER ANY TABLE  by access;
Audit CREATE ANY TABLE by access;
Audit DROP ANY TABLE   by access;
Audit CREATE ANY PROCEDURE by access;
Audit DROP ANY PROCEDURE   by access;
Audit ALTER ANY PROCEDURE  by access;
Audit GRANT ANY PRIVILEGE  by access;
Audit GRANT ANY OBJECT PRIVILEGE  by access;
Audit GRANT ANY ROLE   by access;
Audit AUDIT SYSTEM     by access;
Audit CREATE EXTERNAL JOB  by access;
Audit CREATE ANY JOB   by access;
Audit CREATE ANY LIBRARY   by access;
Audit CREATE PUBLIC DATABASE LINK by access;
Audit EXEMPT ACCESS POLICY by access;
Audit ALTER USER       by access;
Audit CREATE USER      by access;
Audit ROLE             by access;
Audit CREATE SESSION   by access;
Audit DROP USER        by access;
Audit ALTER DATABASE   by access;
Audit ALTER SYSTEM     by access;
Audit ALTER PROFILE    by access;
Audit DROP PROFILE     by access;
Audit DATABASE LINK    by access;
Audit SYSTEM AUDIT     by access;
Audit PROFILE          by access;
Audit PUBLIC SYNONYM   by access;
Audit SYSTEM GRANT     by access;

Extreme Audit (All DDL): (audit by statement)
-----------------------
Audit RESOURCE;        --ALTER SYSTEM, CLUSTER, DATABASE LINK, PROCEDURE, ROLLBACK SEGMENT, SEQUENCE, SYNONYM, TABLE, TABLESPACE, VIEW.
Audit CLUSTER;        --CREATE, DROP, ALTER, TRUNCATE CLUSTER.
Audit CONTEXT;        --CREATE, DROP CONTEXT.
Audit DATABASE LINK;    --CREATE, DROP DATABASE LINK.
Audit DIMENSION;    --CREATE, DROP, ALTER DIMENSION.
Audit DIRECTORY;    --CREATE, DROP DIRECTORY.
Audit INDEX;        --CREATE, DROP, ALTER, ANALYZE INDEX.
Audit MATERIALIZED VIEW;--CREATE, DROP, ALTER  MATERIALIZED VIEW.
Audit OUTLINE;        --CREATE, DROP, ALTER  OUTLINE.
Audit PROCEDURE;    --CREATE FUNCTION/PACKAGE/PACKAGE BODY/PROCEDURE/LIBRARY, DROP FUNCTION/PACKAGE/PACKAGE BODY/PROCEDURE/LIBRARY.
Audit PROFILE;        --CREATE, DROP, ALTER PROFILE.
Audit PUBLIC DATABASE LINK; --CREATE, DROP PUBLIC DATABASE LINK.
Audit PUBLIC SYNONYM;    --CREATE, DROP PUBLIC SYNONYM.
Audit ROLE;        --CREATE, DROP, ALTER, SET ROLE.
Audit ROLLBACK SEGMENT; --CREATE, DROP, ALTER ROLLBACK SEGMENT.
Audit SEQUENCE;        --CREATE, DROP SEQUENCE.
Audit SESSION;         --LOGIN / LOGOFF. Audit specific users=> "AUDIT SESSION BY jeff, scott;"
Audit SYNONYM;        --CREATE, DROP SYNONYM.
Audit SYSTEM AUDIT;    --AUDIT, NOAUDIT.
Audit SYSTEM GRANT;    --GRANT SYSTEM PRIVILEGES/ROLES TO USER/ROLE, REVOKE SYSTEM PRIVILEGES/ROLES FROM USER/ROLE
Audit TABLE;        --CREATE, DROP, TRUNCATE TABLE. <<<No ALTER>>>
Audit TABLESPACE;    --CREATE, DROP, ALTER TABLESPACE.
Audit TRIGGER;        --CREATE, DROP, ALTER TRIGGER, ALTER TABLE ENABLE/DISABLE TRIGGER.
Audit TYPE;
Audit USER;        --CREATE, DROP, ALTER USER.
Audit VIEW;        --CREATE, DROP VIEW.
Audit ALTER SEQUENCE;
Audit ALTER TABLE;
Audit DELETE TABLE;    --DELETE FROM TABLE/VIEW.
Audit COMMENT TABLE;
Audit GRANT DIRECTORY;    --GRANT, REVOKE on DIRECTORY.
Audit GRANT PROCEDURE;    --GRANT, REVOKE on PROCEDURE.
Audit GRANT SEQUENCE;    --GRANT, REVOKE on SEQUENCE.
Audit GRANT TABLE;    --GRANT, REVOKE on TABLE/VIEW/MATERIALIZED VIEW.
Audit GRANT TYPE;    --GRANT, REVOKE on TYPE.
Audit NOT EXISTS;     --audit SQL stmts that fail because the target object does not exist.

Audit Everything: <Paranoid> "NOT RECOMMENDED AT ALL"
---------------------
SQL> Audit ALL PRIVILEGES;
SQL> Audit ALL STATEMENTS;

 >Rollback:
SQL>   NOAUDIT ALL PRIVILEGES;
SQL>   NOAUDIT ALL;

Note: The above statements audit SYSTEM PRIVILEGES both when they are used successfully and when they generate an error.
You can limit this behavior by specifying the audit mode WHENEVER SUCCESSFUL / WHENEVER NOT SUCCESSFUL:
e.g.
SQL> Audit table WHENEVER NOT SUCCESSFUL;
-->This audits all failed SYSTEM PRIVILEGE commands against any table in the DB, for all users.

Checking SYSTEM PRIVILEGES audit data: (DBA_PRIV_AUDIT_OPTS)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SQL> select user_name,PRIVILEGE,success,failure from DBA_PRIV_AUDIT_OPTS;

USER_NAME               PRIVILEGE                SUCCESS    FAILURE
------------------------------ ---------------------------------------- ---------- ----------
                   CREATE ANY TRIGGER            BY ACCESS  BY ACCESS
                   DROP ANY TABLE                BY ACCESS  BY ACCESS
SYSADM                   SELECT ANY TABLE             BY SESSION BY SESSION
AA                   SELECT ANY TABLE             BY SESSION BY SESSION
                   UPDATE ANY TABLE             BY ACCESS  BY ACCESS
.....

Note: A blank field under USER_NAME means "audit for all users"; e.g. the "CREATE ANY TRIGGER" privilege above will be audited for every user who uses it, whether successfully or not.
Note: By default, some SYSTEM PRIVILEGES are audited even if you didn't enable auditing for them manually.
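To see which options are audited database-wide (for all users), including the defaults, you can filter DBA_PRIV_AUDIT_OPTS on a NULL USER_NAME:

```sql
SQL> select privilege, success, failure from DBA_PRIV_AUDIT_OPTS where user_name is null;
```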

########################################################################
Object Privileges Auditing:
########################################################################

> Audit object privileges like [select, insert, update, delete, execute, alter (an OBJECT > table, view, ...), audit, grant (on OBJ), lock, rename]
 on an object when used by ALL USERS (except the ones connected with SYSDBA).
> Audit options can be found in the "DBA_OBJ_AUDIT_OPTS" view.
> Audit records stored in SYS.AUD$ table can be viewed in DBA_AUDIT_TRAIL view.
> Auditing should be enabled on the database by setting the parameter "AUDIT_TRAIL" to value other than "NONE".

AUDIT specific privileges on a specific table:
SQL> AUDIT SELECT,update ON scott.emp;
     =>Audit any user who uses the SELECT or UPDATE privilege on scott.emp (except users connected as SYSDBA).

SQL> AUDIT ALL ON scott.emp;
     =>Audit any user who uses any OBJECT PRIVILEGE on scott.emp (except users connected as SYSDBA).

SQL> NOAUDIT SELECT,update ON scott.emp;
     =>Remove auditing of specific OBJECT PRIVILEGES on scott.emp.

SQL> NOAUDIT ALL ON scott.emp;
     =>Remove auditing of ALL OBJECT PRIVILEGES on scott.emp.

Audit specific privileges on all tables inside a schema:

e.g. to audit the (AUDIT, ALTER) object privileges on all tables under a specific schema, generate the statements with:

SQL> select 'AUDIT AUDIT,ALTER ON '||owner||'.'||table_name||' BY ACCESS;' from dba_tables where owner=upper('&USERNAME');

Extreme Object Privilege AUDITING: <Not RECOMMENDED>
===========================
SQL> AUDIT SELECT TABLE;    --SELECT FROM table/view/materialized view.
SQL> AUDIT INSERT TABLE;    --INSERT INTO table/view.
SQL> AUDIT UPDATE TABLE;    --UPDATE TABLE/VIEW.
SQL> AUDIT LOCK   TABLE;    --LOCK TABLE table/view.
SQL> AUDIT SELECT SEQUENCE;    --Any statement containing sequence.CURRVAL or sequence.NEXTVAL.

Checking OBJECT PRIVILEGES audit data: (DBA_OBJ_AUDIT_OPTS)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SQL> select * from DBA_OBJ_AUDIT_OPTS;

OWNER                   OBJECT_NAME              OBJECT_TYPE          ALT AUD COM DEL GRA IND INS LOC
------------------------------ ------------------------------ ----------------------- --- --- --- --- --- --- --- ---
REN SEL UPD REF EXE CRE REA WRI FBK
--- --- --- --- --- --- --- --- ---
AA                   AA                  TABLE              -/- -/- -/- -/- -/- -/- -/- -/-
-/- A/A -/- -/- -/- -/- -/- -/- -/-
.....

Note: No OBJECT PRIVILEGES are audited by default; they need to be set up manually.
Note: If you query DBA_AUDIT_TRAIL, the ACTION_NAME for OBJECT privileges will appear as "SESSION REC".


View Audit data: <SYSTEM & OBJECT Privileges Auditing>
============
All Audit data: DBA_AUDIT_TRAIL
-----------------
SQL>
    set linesize 160
    col OS_USERNAME for a15
    col USERNAME for a15
    col TERMINAL for a15
    col ACTION_NAME for a9
    col EXTENDED_TIMESTAMP for a36
    col USERHOST for a21
    col owner for a15
    col OBJ_NAME for a30
    col ACTION_NAME for a15
    col COMMENT_TEXT for a30
    select
    OS_USERNAME,
    USERNAME,
    ACTION_NAME,OWNER,OBJ_NAME,
    EXTENDED_TIMESTAMP from dba_audit_trail where username='SCOTT';


Session LOGIN/LOGOFF data:
-------------------------------------
>Failed logins in the last 24Hours:
 >>>>>>>>>>>>>>>>>>>>>>>>
SQL>
    set linesize 170
    col OS_USERNAME for a15
    col USERNAME for a15
    col TERMINAL for a15
    col ACTION_NAME for a9
    col EXTENDED_TIMESTAMP for a40
    col USERHOST for a21
    col COMMENT_TEXT for a40
    select /*+ parallel 2 */ EXTENDED_TIMESTAMP,OS_USERNAME,USERNAME,TERMINAL,USERHOST,ACTION_NAME,COMMENT_TEXT
        from DBA_AUDIT_TRAIL
    where returncode = 1017
    and timestamp > (sysdate -1)
    order by 1;

>Successful logins during a period of time: -Last 6Hours-
 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
SQL>
    set linesize 170
    col OS_USERNAME for a15
    col USERNAME for a15
    col TERMINAL for a15
    col ACTION_NAME for a9
    col EXTENDED_TIMESTAMP for a40
    col USERHOST for a21
    col COMMENT_TEXT for a40
    select /*+ parallel 2 */ EXTENDED_TIMESTAMP,
    --OS_USERNAME,
    USERNAME,TERMINAL
        ,COMMENT_TEXT,
    ACTION_NAME from DBA_AUDIT_TRAIL
    where
    EXTENDED_TIMESTAMP > (SYSDATE-1/4);

Failed statements due to "XXX does not exist" Error: DBA_AUDIT_EXISTS
--------------------------------------------------------------
SQL>
    set linesize 170
    col OS_USERNAME for a15
    col USERNAME for a15
    col TERMINAL for a15
    col ACTION_NAME for a9
    col EXTENDED_TIMESTAMP for a36
    col USERHOST for a21
    col owner for a15
    col OBJ_NAME for a20
    col ACTION_NAME for a15
    col COMMENT_TEXT for a30
    select
    OS_USERNAME,
    USERNAME,
    sql_text,
    USERHOST,
    ACTION_NAME,OWNER,OBJ_NAME,
    COMMENT_TEXT,
    EXTENDED_TIMESTAMP from DBA_AUDIT_EXISTS where username='AA' order by EXTENDED_TIMESTAMP;


Auditing in a Multitier Environment:
--------------------------------------------
In a multitier environment, Oracle preserves the identity of the client through all tiers, which enables auditing of actions taken on behalf of the client. To do such auditing:
Ex:
SQL> AUDIT SELECT TABLE BY appserver ON BEHALF OF jackson/ANY;
-->Audits SELECT TABLE statements issued on behalf of client jackson by the proxy application server appserver.

Auditing SYS.AUD$:
-------------------------
If you want to audit SYS.AUD$ table against modification by any NON-SYSDBA user:
SQL> AUDIT select, update, delete on sys.aud$;

If a NON-SYSDBA user tries to delete records from SYS.AUD$, the delete will succeed; however, the delete action itself will be recorded in SYS.AUD$.


########################################################################
Fine Grained Auditing [FGA]:
########################################################################

> You can check FGA Audit Policies by querying "DBA_AUDIT_POLICIES" view.
> Audit records stored in SYS.FGA_LOG$ table can be viewed in DBA_FGA_AUDIT_TRAIL view.
> FGA auditing is a sort of OBJECT PRIVILEGE auditing, but it creates a policy to audit ONLY (SELECT, INSERT, UPDATE, DELETE) on (tables, views),
 and it can narrow the scope of auditing to specific columns instead of the full table, and to specific conditions.
 It can also execute a specific procedure [like sending an e-mail] when the audit conditions are met.
> FGA is very powerful for auditing columns holding sensitive data [e.g. the SAL and COMM columns in the EMP table].

To start using FGA first create the audit policy:

Create Policy Examples:
=================
--<Audit "select statement" on all table DEPT columns>
SQL>
  begin
  dbms_fga.add_policy
  (object_schema=>'SCOTT',
  OBJECT_NAME=>'DEPT',
  POLICY_NAME=>'POLICY1',
  enable => TRUE,
  STATEMENT_TYPES=>'select');
  end;
/

--<Audit "select" on specific columns on table>
SQL>
  begin
  dbms_fga.add_policy
  (object_schema=>'SCOTT',
  OBJECT_NAME=>'EMP',
  POLICY_NAME=>'EMP_POLICY1',
  STATEMENT_TYPES=>'select',
  enable => TRUE,
  audit_column=>'sal,comm');
  end;
/

--<Audit "select" on the table if deptno 10 is selected>
SQL>
  begin
  dbms_fga.add_policy
  (object_schema=>'SCOTT',
  OBJECT_NAME=>'EMP',
  POLICY_NAME=>'POLICY3',
  STATEMENT_TYPES=>'select',
  enable => TRUE,
  audit_condition=>'deptno=10');
  end;
/

=> The SQL statement doesn't have to include "WHERE DEPTNO=10"; if deptno 10 is queried by any statement, the audit will fire.
   The following queries will fire the audit:
SQL>    select * from emp;
SQL>    select count(*) from emp;

The following queries will NOT fire the audit:
SQL>    select ename from emp;
SQL>    select ename from emp where deptno>10;


--<Audit when the salary column is queried where deptno=10, plus run a procedure>:
SQL>
 begin
  dbms_fga.add_policy (
  object_schema => 'scott',
  object_name => 'emp',
  policy_name => 'audit_emp_salary',
  audit_condition=> 'deptno=10',
  audit_column => 'salary',
  handler_schema => 'sec',
  handler_module => 'log_emps_salary',
  enable => TRUE,
  statement_types=> 'select' );
  end;
/

handler_module => any DB procedure to execute when the audit fires, to take additional actions such as sending an e-mail to the DBA.
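For illustration, a minimal handler could look like the sketch below. The names sec and log_emps_salary match the policy above; FGA invokes the handler with the schema, object, and policy names. The e-mail part is an assumption for the example and requires UTL_MAIL to be installed and configured (SMTP_OUT_SERVER set), and the addresses are hypothetical:

```sql
CREATE OR REPLACE PROCEDURE sec.log_emps_salary (
  object_schema VARCHAR2,
  object_name   VARCHAR2,
  policy_name   VARCHAR2)
AS
BEGIN
  -- Additional action when the policy fires, e.g. notify the DBA.
  UTL_MAIL.SEND(
    sender     => 'db@example.com',
    recipients => 'dba@example.com',
    subject    => 'FGA alert: '||policy_name,
    message    => 'Policy fired on '||object_schema||'.'||object_name);
END;
/
```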

Notes:
If you created an audit policy successfully but it does not capture any audit data, this may be because:
 – The audit condition is invalid
 – The event handler does not exist or is invalid

Disable Policy:
===========
SQL>
 begin
 dbms_fga.disable_policy
 (object_schema=>'SCOTT',
  OBJECT_NAME=>'EMP',
  POLICY_NAME=>'emp_POLICY1');
end;
/

Enable Policy:
==========
SQL>
 begin
 dbms_fga.enable_policy
 (object_schema=>'SCOTT',
  OBJECT_NAME=>'EMP',
  POLICY_NAME=>'emp_POLICY1');
end;
/

Drop Policy:
=========
SQL> 
  begin
  dbms_fga.drop_policy
  (object_schema=>'SCOTT',
  OBJECT_NAME=>'EMP',
  POLICY_NAME=>'EMP_POLICY1');
  end;
/

View All FGA policies: DBA_AUDIT_POLICIES
=================
SQL> Select * from DBA_AUDIT_POLICIES;
SQL> Select POLICY_NAME,ENABLED,OBJECT_SCHEMA,OBJECT_NAME,POLICY_COLUMN from DBA_AUDIT_POLICIES;

Note: POLICY_COLUMN sometimes shows only one column (usually the first one in the policy).

View FGA Audit data: DBA_FGA_AUDIT_TRAIL
================
SQL> 
set linesize 170
col USERHOST for a30
col sql_text for a70
select Policy_name,db_user,to_char(timestamp,'DD-MM-YYYY HH24:MI:SS') as time,userhost,sql_bind,sql_text
from dba_fga_audit_trail;
--where policy_name='POLICY2';

###########################################################
AUDIT MAINTENANCE:
###########################################################

Moving the Database Audit Tables outside SYSTEM Tablespace:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The (SYS.AUD$ & SYS.FGA_LOG$) tables can be moved from the SYSTEM tablespace to any other tablespace using the following procedure:

Why Moving Audit Tables?:
---------------------------------
It's important to move the audit tables from the SYSTEM tablespace to an auto-extensible tablespace on separate disks to:
 >Eliminate DB hangs if the audit tables become full.
 >Improve DB performance by decreasing I/O on the SYSTEM tablespace.

CAUTION:
*********
This operation causes auditing to hang, which can lead the whole database to hang; perform this job during a downtime window.

Move Procedure:
============
SQL> 
BEGIN
 DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
  AUDIT_TRAIL_TYPE            => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD, --Move AUD$ only
--AUDIT_TRAIL_TYPE            => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD, --Move FGA_LOG$ only
--AUDIT_TRAIL_TYPE            => DBMS_AUDIT_MGMT.AUDIT_TRAIL_DB_STD,  --Move both AUD$ & FGA_LOG$
  AUDIT_TRAIL_LOCATION_VALUE  => 'TS_USER_DATA_01');              --New Tablespace
END;
/

Checking audit trail options: (TBS,max_size,age)
---------------------------------
SQL> 
col PARAMETER_NAME for a30
col PARAMETER_VALUE for a30
col AUDIT_TRAIL for a30
select * from DBA_AUDIT_MGMT_CONFIG_PARAMS;

Backup & Purge Audit tables:
########################
Create a job (DB or OS) that runs this SQL, changing the number of days you want to keep; here, one year of data is retained (Aramco standards).

Backing up the historical audit data to a DB table:
-----------------------------------------------------------
SQL> Create table aud$_backup compress tablespace audit2 as select * from AUD$ where ntimestamp# < sysdate-365;
SQL> Create table fga_log$_backup compress tablespace audit2 as select * from FGA_LOG$ where ntimestamp# < sysdate-365;
SQL> Delete from aud$ where ntimestamp#< sysdate-365;
SQL> COMMIT;
SQL> Delete from FGA_LOG$ where ntimestamp# < sysdate-365;
SQL> COMMIT;

Or Export the historical audit data to a dumpfile:
---------------------------------------------------------
# exp \'\/ as sysdba \' TABLES=SYS.AUD$ QUERY=\"where NTIMESTAMP#\<\(\'01\-Jan\-13\'\)\" STATISTICS=none FILE=/backup/AUD_OLDER_JAN2013.dmp log=/backup/exp_AUD_OLDER_JAN2013.log

SQL> delete from sys.aud$ where ntimestamp# < ('01-Jan-2013');
SQL> COMMIT;

Do the same for FGA_LOG$
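For reference, the FGA_LOG$ cleanup mirrors the AUD$ statements above (the same date-format caveat applies; the literal relies on your NLS date settings):

```sql
SQL> delete from sys.fga_log$ where ntimestamp# < ('01-Jan-2013');
SQL> COMMIT;
```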

Note: Exporting a system table like SYS.AUD$ should be done using the EXP utility; you cannot use the EXPDP utility to export tables under the SYS schema.

Freeze/Un-Freeze Auditing using ORADEBUG:

Using ORADEBUG, you can freeze auditing on the database, including audit features like AUDIT SYS operations and Database Vault, without restarting the instance:

-- Get the Audit process address:
SQL> select fsv.KSMFSNAM,sga.* from x$ksmfsv fsv, x$ksmmem sga where sga.addr=fsv.KSMFSADR and fsv.ksmfsnam like 'kzaflg_%';

KSMFSNAM        ADDR       INDX    INST_ID KSMMMVAL
------------------------------ ---------------- ---------- ---------- ----------------
kzaflg_        0000000060036ED8      28123     1 0000000000000001

Freeze Auditing using the address above (note: the poke command itself will be written to the audit trail):
SQL> oradebug setmypid
SQL> oradebug poke 0x60036ED8 1 0
BEFORE: [060036ED8, 060036EDC) = 00000001
AFTER: [060036ED8, 060036EDC) = 00000000

UN-Freeze Auditing:
SQL> oradebug setmypid
SQL> oradebug poke 0x60036ED8 1 1
BEFORE: [060036ED8, 060036EDC) = 00000000
AFTER: [060036ED8, 060036EDC) = 00000001

Note: The only way to stop ORADEBUG from performing such powerful activities, which can compromise the security of your database, is the parameter "_fifteenth_spare_parameter": set it to "restricted" to restrict powerful ORADEBUG commands like "poke", or to "all" to disable all ORADEBUG commands. The default, "none", allows everything.

Tuesday, 5 August 2014

Recover An RMAN Image Copy Backup From Incremental Backup

In my backup strategy I usually consider an RMAN image copy backup for production databases that have sufficient free disk space at the backup location.

But first, what is an RMAN Image/Copy Backup?
Starting with Oracle 10g, RMAN can back up datafiles as image copies. An image copy is a duplicate of the datafile containing all allocated and free blocks. It is much like the old hot backup method, where you put the tablespace in BEGIN BACKUP mode, copy the datafiles to the backup location from the OS side, then take the tablespace out of backup mode with END BACKUP; RMAN takes care of that mechanism automatically.

Why you should consider RMAN Image/Copy Backup in your backup strategy:
The biggest advantage is that you can switch a physically corrupted/lost/damaged datafile to refer to its image copy backup, then recover that datafile and have it ready for use without restoring the lost datafile back, which saves a huge amount of time by skipping the restore step.
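As a sketch of that switch-and-recover flow (datafile number 4 here is a hypothetical example), this generates the RMAN commands into a file rather than running them, so you can review and adjust the file number for your own database first:

```shell
#!/bin/bash
# Sketch: generate the RMAN commands that switch a damaged datafile
# to its image copy and recover it in place.  Datafile 4 is an
# example only -- confirm the file number from V$DATAFILE first.
cat > /tmp/switch_to_copy.rman <<'EOF'
run {
  sql 'alter database datafile 4 offline';
  switch datafile 4 to copy;
  recover datafile 4;
  sql 'alter database datafile 4 online';
}
EOF
echo "Review /tmp/switch_to_copy.rman, then: rman target / cmdfile=/tmp/switch_to_copy.rman"
```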

How to perform an RMAN Image/Copy Backup for your database:
Just connect to the RMAN and execute this simple command:
RMAN> backup as copy database format '/backupdisk/rmanbkps/copy/%U' tag='DB_COPY' plus archivelog;
Note: The free space at the backup location should be equal to or greater than the size of the database.
Note: "plus archivelog" will back up the archivelogs before and after the backup to guarantee a consistent restore of the backup.

How to Recover an RMAN Image/Copy Backup:
If you include RMAN image/copy backups in your backup strategy, you should consider recovering (rolling forward) the existing image copy instead of creating a new image copy on every backup.
By recovering the image copy, you only need to apply to the existing image backup the changes committed on the database since the last backup, which cuts down the backup time and the CPU overhead during the backup window.

  Advantages of recovering an Image backup:
  > Reduce the impact caused by a complete backup.
  > Reduce the time of backup window.
  > Reduce network bandwidth in case of backing up over the network.
  > Can be used on a NOARCHIVELOG DB (mount mode).

How to perform recovery of RMAN Image backup:
You can use the following script for the whole operation:

# rman target /
RMAN> run {
        ALLOCATE CHANNEL C1 DEVICE TYPE DISK FORMAT '/FRA/RMANBKP/%U';
        BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1 FOR RECOVER
          OF COPY WITH TAG 'FULL_IMAGE_UPDTD_BKP' DATABASE FORMAT
          '/FRA/RMANBKP/%d_%t_%s_%p';
        RECOVER COPY OF DATABASE WITH TAG 'FULL_IMAGE_UPDTD_BKP';
        DELETE NOPROMPT BACKUP TAG 'FULL_IMAGE_UPDTD_BKP';
      }
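To run that block unattended (e.g. from cron during the backup window), a wrapper like this sketch can help; the ORACLE_SID and directory paths are assumptions to adjust for your environment:

```shell
#!/bin/bash
# Sketch of a cron-friendly wrapper around the RMAN script above.
# ORACLE_SID and the backup directory are examples -- adjust both.
export ORACLE_SID=orcl
BKP_DIR=${BKP_DIR:-/tmp/RMANBKP}     # production would use /FRA/RMANBKP
mkdir -p "$BKP_DIR"
LOG="${BKP_DIR}/img_refresh_$(date +%Y%m%d).log"

cat > "${BKP_DIR}/img_refresh.rman" <<'EOF'
run {
  ALLOCATE CHANNEL C1 DEVICE TYPE DISK FORMAT '/FRA/RMANBKP/%U';
  BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1 FOR RECOVER
    OF COPY WITH TAG 'FULL_IMAGE_UPDTD_BKP' DATABASE FORMAT
    '/FRA/RMANBKP/%d_%t_%s_%p';
  RECOVER COPY OF DATABASE WITH TAG 'FULL_IMAGE_UPDTD_BKP';
  DELETE NOPROMPT BACKUP TAG 'FULL_IMAGE_UPDTD_BKP';
}
EOF

# Only invoke RMAN when it is actually available on this host:
if command -v rman >/dev/null 2>&1; then
  rman target / cmdfile="${BKP_DIR}/img_refresh.rman" log="$LOG"
else
  echo "rman not found; command file left at ${BKP_DIR}/img_refresh.rman"
fi
```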

How it works:
> The FIRST run of the script tries to create a compressed incremental level 1 backup, but fails because no base (parent) incremental level 0 backup exists yet, and throws this message:
"no parent backup or copy of datafile xx found"
It then creates the first image copy backup with the tag FULL_IMAGE_UPDTD_BKP, and fails to recover it from an incremental level 1 backup, since none exists at that point, throwing this message:
"no copy of datafile xx found to recover"

> The SECOND run creates the first compressed incremental level 1 backup, then uses it to recover the image copy that already exists from the first run.
At the end, the last command deletes the incremental level 1 backup that was used to recover the image backup. If you don't have a block change tracking file on the database, it's recommended NOT to delete the last incremental level 1 backup.

And so on for the next runs ...

Facts:
> Each time a datafile is added to the database, a new image copy of it will be created on the next run of the script.
> A tag must be used to identify the image backup.
> Archivelogs are not involved in recovering the copy; the incremental backups read changed blocks directly from the datafiles, so changes made by NOLOGGING operations are still captured.

To speed up the recovery of the image backup, consider creating a Block Change Tracking file:
SQL> Alter database enable block change tracking using file '/backupdisk/BCT.log' reuse;

> Block change tracking speeds up the creation of incremental backups.
> In a RAC environment the block change tracking file should be located on storage shared between the RAC nodes.
> Block change tracking info can be found in the V$BLOCK_CHANGE_TRACKING view.
> The block change tracking file size is approximately 1/30000 of the size of the data blocks being tracked.
> Once you enable block change tracking, a new background process [CTWR] handles the writes to the BCT file.
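As a quick sketch of that 1/30000 rule of thumb (the 300 GB figure below is just an example; Oracle also adds a small fixed overhead on top), the expected BCT file size can be estimated like this:

```shell
#!/bin/bash
# Estimate the BCT file size using the ~1/30000 rule of thumb above.
DB_SIZE_GB=300                       # example database size
DB_SIZE_MB=$((DB_SIZE_GB * 1024))
BCT_MB=$((DB_SIZE_MB / 30000))
echo "A ${DB_SIZE_GB} GB database needs roughly ${BCT_MB} MB of change tracking file"
```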

Note: The disadvantage of a block change tracking file is that it introduces a small performance overhead during normal database activity.

For more information on how to use an image copy backup in disaster scenarios, please check this link:
http://dba-tips.blogspot.ae/2011/11/switch-database-to-rman-copy-backup-and.html


Reference:
http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmbckba.htm#BRADV239
http://www.oracle-base.com/articles/10g/rman-enhancements-10g.php
http://www.pythian.com/documents/Pythian-oracle-block-change.pdf


Friday, 1 August 2014

Simulate Database I/Os Before Installing A Database

What I'm introducing in this post is the Orion tool, which simulates database I/O before Oracle is installed on the system. If you want to calibrate I/O on a system that already has an Oracle database installed, the best way is to run DBMS_RESOURCE_MANAGER.CALIBRATE_IO as the SYS user:

SQL> CONN / AS SYSDBA
SQL> SET SERVEROUTPUT ON
SQL> DECLARE
       l_latency  PLS_INTEGER;
       l_iops     PLS_INTEGER;
       l_mbps     PLS_INTEGER;
     BEGIN
       DBMS_RESOURCE_MANAGER.calibrate_io (num_physical_disks => 1,
                                           max_latency        => 20,
                                           max_iops           => l_iops,
                                           max_mbps           => l_mbps,
                                           actual_latency     => l_latency);
       DBMS_OUTPUT.put_line('Max IOPS = ' || l_iops);
       DBMS_OUTPUT.put_line('Max MBPS = ' || l_mbps);
       DBMS_OUTPUT.put_line('Latency  = ' || l_latency);
     END;
     /

Now let's continue with calibrating Oracle I/O's on a system that doesn't have Oracle installed yet.

What is Orion:

> Orion is a tool provided by Oracle to simulate database I/O load on disk devices, so you can benchmark your hardware (disks) before having a database on the system.
> Orion is a standalone tool; it doesn't need a database to be installed on the system before running the test.
Note: Orion is not supported by Oracle.


How to use Orion:

Download Orion tool:
Go to http://download.oracle.com, scroll down to the Utilities section, and download Orion from there.
Note: At the time of writing this post (01-Aug-2014) I checked the Oracle web site but couldn't find the link to the Orion utility; I guess it has been accidentally removed, hopefully Oracle will sort this out soon.

Unpack Orion:
# gunzip orion_linux_x86-64.gz
# chmod u+x orion_linux_x86-64


Run Orion:
1) Create a file containing all the disks you want to test, one per line, using the raw device name "/dev/xxx":
I recommend testing disk by disk, putting only one disk (one line) in the fs.lun file, to get a precise per-disk result.

# vi fs.lun

/dev/sda
/dev/sdc
/dev/sdb



2) Execute the following command:

# ./orion_linux_x86-64 -run simple -testname fs -num_disks 3

Parameters:
-run          Determines the level of the test:
                simple       Small and large random I/O are tested individually.
                normal       Same as the simple run level, but also tests combinations of small and large I/Os together.
                advanced     Lets the user fine-tune the workload with a wide variety of options.
-num_disks    Defines the number of spindles in the storage array to be tested.
-size_small   Defines the size of small random I/O.
-size_large   Defines the size of large random or sequential I/O.
-type         Defines the type of large I/O (random or sequential).
-write        Defines the percentage of writes in the workload.
-matrix       Defines the mixture of workloads to run.

Once the test is done, a summary file <testname>_summary.txt will be created containing a description of the chosen options and some high-level statistics collected during the test, such as IOPS, MBPS and latency.

Note: The results represent the disks as a whole, i.e. the max/min across all disks listed in the fs.lun file.
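If you want to pull the headline numbers out of the summary file from a script, a sketch like this works; note that the "Maximum/Minimum ...=" line format here is an assumption based on typical Orion output, so check it against your own <testname>_summary.txt first:

```shell
#!/bin/bash
# Sketch: extract the headline metrics from an Orion summary file.
# The line format below is an assumption; verify against your output.
SUMMARY=${1:-/tmp/fs_summary.txt}
# Fabricated sample data so the sketch is self-contained:
cat > "$SUMMARY" <<'EOF'
Maximum Large MBPS=91.90 @ Small=0 and Large=2
Maximum Small IOPS=441 @ Small=5 and Large=0
Minimum Small Latency=6.62 @ Small=1 and Large=0
EOF
# Print "metric: value", dropping the "@ Small=x and Large=y" suffix:
grep -E '^(Maximum|Minimum)' "$SUMMARY" | while IFS='=' read -r metric rest; do
  value=${rest%% @*}
  echo "${metric}: ${value}"
done
```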

Extra Examples:
    Simulate Read-Only test no writes, mix small & large random and sequential reads:
    # ./orion_linux_x86-64 -run advanced -testname fs -write 0 -simulate concat -matrix detailed -num_disks 3

    Simulate Write-Only test no Reads, mix small & large random and sequential writes:
    # ./orion_linux_x86-64 -run advanced -testname fs -write 100 -simulate concat -matrix detailed -num_disks 3

    In case you're getting ASYNC IO errors try this:
    # ./orion_linux_x86-64 -run advanced -testname fs -write 100 -simulate concat

For more details on Orion tool you can check the following links:
Oracle Documentation: http://download.oracle.com/otn/utilities_drivers/orion/Orion_Users_Guide.pdf?AuthParam=1406916708_e12a12c057ddf82cf5e3c56b651ee8df
For a deep explanation: http://www.slideshare.net/alexgorbachev/benchmarking-oracle-io-performance-with-orion-by-alex-gorbachev



Thursday, 17 July 2014

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

While taking an incremental RMAN backup I faced this error:

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

The issue: I ran an RMAN script containing "host" commands from an old RMAN session; that session had been opened while my current working directory was a directory that was deleted before the script ran.

The fix: close the old RMAN session, cd to a directory that exists, and open a new RMAN session.
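The scenario is easy to reproduce outside RMAN, and the same fix applies (a sketch; the temp directory name is arbitrary):

```shell
#!/bin/bash
# Reproduce the error: put the shell in a directory, delete that
# directory, then start a child shell from it.
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
rmdir "$WORKDIR"                 # our current directory no longer exists
bash -c 'true' 2>&1 | head -1    # typically prints the shell-init/getcwd error
cd /tmp                          # the fix: move to a directory that exists
STATUS=$(bash -c 'echo FIXED')   # child shells now start cleanly
echo "$STATUS"
```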