
Thursday, 21 March 2019

All About Statspack

Are DBAs still using Statspack instead of AWR? Yes
Why?

- It's the only option for standby databases, as you cannot generate an AWR report on the standby DB; the data would represent the Primary DB, not the DR [technical details are covered in the second half of this article]

- AWR is costly: using it even once requires the Diagnostic Pack license, which is not cheap for small and mid-size companies.
https://www.oracle.com/assets/technology-price-list-070617.pdf

- Statspack covers roughly 80% of what AWR provides; it lacks the clusterware details and a few other minor components, but it can still do 80% of the job for you.

- Statspack doesn't have the fancy HTML look of AWR, but this shouldn't bother you if you are an expert DBA 😅

- AWR is not available in Standard Edition. Yes, you can generate an AWR report on Standard Edition, but it will be a useless report full of zero values, and then Oracle's Licensing Audit Department will ask you to upgrade to Enterprise Edition because you tried to generate an AWR report on SE. It sounds silly, but this actually happened to me!

In conclusion, if you are buying the Diagnostic Pack license mainly for AWR reports, I believe you will be paying too much just for the 20% of information that is only available in AWR reports.

Now let's jump to the technical part, which is supposed to cover most of the technical challenges you'll face when dealing with Statspack.


Statspack Installation:

Prerequisites:
- Although you can use the SYSAUX tablespace to store Statspack data, it's recommended to create a separate tablespace dedicated to STATSPACK, e.g. a TOOLS tablespace.
- For better performance analysis set the dynamic initialization parameter "timed_statistics" to true [Default].
ALTER SYSTEM SET timed_statistics=TRUE;

STATSPACK Installation Steps:

SQL>  connect / as sysdba
SQL>  @?/rdbms/admin/spcreate
PERFSTAT password:
default tablespace: assign "TOOLS" as a default tablespace for PERFSTAT
temporary tablespace:

SQL>  connect / as sysdba
SQL>  grant create job to PERFSTAT;

To take a snapshot manually: [For reference]
SQL>  connect perfstat/perf#123
SQL>  execute statspack.snap;

Disable AWR Auto Snapshots: [Optional]
SQL> execute dbms_workload_repository.modify_snapshot_settings (interval => 0, retention => 43200);  -- both values are in minutes; 43200 = 30 days
Create a job to take a snapshot every 30min:
SQL>
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'STATSPACK_SNAP',
job_type => 'STORED_PROCEDURE',
job_action => 'statspack.snap',
repeat_interval => 'FREQ=HOURLY; BYHOUR=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23; BYMINUTE=4,34',
auto_drop => FALSE,
enabled => TRUE,
comments => 'Statspack automated snap');
END;
/
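The repeat_interval above uses DBMS_SCHEDULER calendar syntax: FREQ=HOURLY with BYMINUTE=4,34 fires twice every hour. As a quick illustration (not part of Statspack, just a sanity check of the clause), it expands to these snapshot times:

```python
def snapshot_times():
    """Expand the calendar clause FREQ=HOURLY; BYMINUTE=4,34
    into the snapshot times it implies: 2 per hour, 48 per day."""
    return [f"{h:02d}:{m:02d}" for h in range(24) for m in (4, 34)]

times = snapshot_times()
print(len(times))   # 48 snapshots per day
print(times[:3])    # ['00:04', '00:34', '01:04']
```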

Create a job to purge snapshots older than one month:
-- Create the procedure:

SQL> Create or replace PROCEDURE purge_stats IS
   loop_counter number;
   v_num number;
   tot_num number;
   CURSOR get_num IS
   SELECT round((count(*)/50),0) from stats$snapshot where snap_time < sysdate-31;
begin
   open get_num;
   FETCH get_num INTO tot_num;
   close get_num;
   tot_num := tot_num + 1;
   for loop_counter in 1 .. tot_num loop
       delete from stats$snapshot where snap_time < sysdate-31 and rownum <51;
       commit;
   end loop;
end;
/
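The procedure above deletes stale rows in batches of 50 with a commit after each batch, so undo usage stays small; the extra iteration from the `+ 1` covers the remainder left by ROUND. A minimal Python sketch (purely illustrative) of that loop-count arithmetic:

```python
def purge_iterations(stale_rows: int, batch_size: int = 50) -> int:
    """Mirror of the PL/SQL loop count above: ROUND(total/batch) + 1
    iterations, each deleting up to `batch_size` rows and committing.
    The +1 covers any remainder the rounding leaves behind."""
    return round(stale_rows / batch_size) + 1

print(purge_iterations(1230))   # 26 passes of up to 50 row deletes
```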

-- Create the job:
SQL> BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'PURGE_STATSPACK_SNAP',
job_type => 'STORED_PROCEDURE',
job_action => 'purge_stats',
repeat_interval => 'FREQ=DAILY; BYHOUR=6; BYMINUTE=13',
auto_drop => FALSE,
enabled => TRUE,
comments => 'Purge Statspack snaps older than 31 days');
END;
/

Note: The time required to create a snapshot depends on shared_pool_size; the larger the shared pool, the longer it takes to create a snapshot.

Generate a STATSPACK report:
SQL>  connect perfstat/perf#123
SQL>  @?/rdbms/admin/spreport

- You have to provide at least 2 snapshot IDs between which the database was not restarted.
- It will ask for the report name.
- The report will be created under the current working directory.
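If you later want to script report generation (e.g. from cron), spreport's prompts can be answered up front by pre-defining its SQL*Plus substitution variables. A hypothetical sketch that builds such a script; the variable names assume spreport's standard begin_snap/end_snap/report_name prompts:

```python
def spreport_script(begin_snap: int, end_snap: int, report_name: str) -> str:
    """Build a SQL*Plus script that answers spreport's prompts up front.
    Pre-DEFINE-ing a &&variable suppresses its interactive prompt
    (standard SQL*Plus behaviour). Paths/names here are examples."""
    return (
        f"define begin_snap={begin_snap}\n"
        f"define end_snap={end_snap}\n"
        f"define report_name={report_name}\n"
        "@?/rdbms/admin/spreport\n"
        "exit\n"
    )

print(spreport_script(45, 50, "sp_45_50.lst"))
```

You would then feed the generated text to `sqlplus perfstat/<password>` from your scheduler of choice.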

Advanced Tasks:


Mark a snapshot as a baseline

You can mark a group of snapshots as a baseline that represents your DB's normal performance.
The snapshots inside the baseline will be kept and will not be deleted, even by STATSPACK.PURGE.

Examples:

--This will put the snapshots from snap_id 45 to snap_id 50 in a baseline.
SQL> exec statspack.make_baseline (i_begin_snap => 45,i_end_snap => 50);

--This will baseline only snapshots 45 and 50, not the range in between.
SQL> exec statspack.make_baseline (45,50,false);

--This will baseline the snapshots taken between 9am and 12pm on 31-Aug:
SQL> exec statspack.make_baseline (to_date('31-AUG-2018 09:00','DD-MON-YYYY HH24:MI'), to_date('31-AUG-2018 12:00','DD-MON-YYYY HH24:MI'));

--This will remove the baseline mark from the snapshots taken between 9am and 12pm, turning them back into ordinary snapshots that can be deleted:
SQL> exec statspack.clear_baseline (to_date('31-AUG-2018 09:00','DD-MON-YYYY HH24:MI'), to_date('31-AUG-2018 12:00','DD-MON-YYYY HH24:MI'));


Manually Purge specific snapshots:
SQL>  connect perfstat/perf#123
SQL>  set transaction use rollback segment rbig;  -- only needed under manual undo management, to use a large rollback segment for the deletes
SQL>  @?/rdbms/admin/sppurge

It will ask for the low and high snapshot IDs of the range to purge.

To truncate ALL Statspack data: [Use with caution]
SQL>  connect perfstat/perf#123
SQL>  @?/rdbms/admin/sptrunc

Remove STATSPACK:
[AS SYSDBA]
SQL> @?/rdbms/admin/spdrop.sql

-- To remove the Standby DB Statspack schema [covered below], run this from the Primary DB:
SQL> @?/rdbms/admin/sbdrop


STATSPACK for Standby DB:

As you know, you can generate neither an AWR nor a STATSPACK report directly on the standby database, as the statistics available on the standby DB are a replica of the primary DB's. Oracle recommends installing STATSPACK for standby on the primary DB itself, where performance stats are gathered from the standby DB over a DB link and stored on the Primary DB.

To install STATSPACK for a Standby DB [orcldr]:
[FROM the Primary DB]
[AS SYSDBA]
SQL> @?/rdbms/admin/sbcreate
-- Provide a password for the stdbyperf user that will be created.
Default Tablespace: TOOLS
Temp Tablespace: TEMP

Do you want to continue (y/n) ?
Enter value for key: y
Enter value for tns_alias: orcldr1    -- STANDBY Instance TNS alias
Enter value for perfstat_password: -- Same password

In case your Standby DB is a RAC setup, add the second STANDBY Instance2:
[FROM Primary DB]
SQL> connect stdbyperf/perf#123
SQL> @?/rdbms/admin/sbaddins
Enter value for tns_alias: orcldr2 -- STANDBY Instance2 TNS alias
Enter value for perfstat_password: -- Same password

Create a job to automatically gather STATSPACK Snapshots from the Standby:
[FROM Primary DB]
SQL> conn / as sysdba
SQL> grant create job to stdbyperf;
SQL> connect stdbyperf/perf#123

SQL> BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'STATSPACK_SNAP_STANDBY_NODE1',
job_type => 'STORED_PROCEDURE',
job_action => 'STATSPACK_ORCLDR_ORCLDR1.snap',
repeat_interval => 'FREQ=HOURLY; BYHOUR=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23; BYMINUTE=4,34',
auto_drop => FALSE,
enabled => TRUE,
comments => 'Statspack automated snap for ORCLDR1');
END;
/

In case your Standby DB is a RAC setup, you have to create another job to create Snapshots for the second STANDBY Instance2:
[FROM Primary DB]
SQL> BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'STATSPACK_SNAP_STANDBY_NODE2',
job_type => 'STORED_PROCEDURE',
job_action => 'STATSPACK_ORCLDR_ORCLDR2.snap',
repeat_interval => 'FREQ=HOURLY; BYHOUR=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23; BYMINUTE=4,34',
auto_drop => FALSE,
enabled => TRUE,
comments => 'Statspack automated snap for ORCLDR2');
END;
/

Create a job to automatically purge old Snapshots: [Older than a month]

-- Create Purge Procedure to be called from purge job:
[FROM Primary DB]
SQL> Create or replace PROCEDURE purge_stats IS
   loop_counter number;
   v_num number;
   tot_num number;
   CURSOR get_num IS
   SELECT round((count(*)/50),0) from stats$snapshot where snap_time < sysdate-31;
begin
   open get_num;
   FETCH get_num INTO tot_num;
   close get_num;
   tot_num := tot_num + 1;
   for loop_counter in 1 .. tot_num loop
       delete from stats$snapshot where snap_time < sysdate-31 and rownum <51;
       commit;
   end loop;
end;
/

-- Create the job:
[FROM Primary DB]
SQL> BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'PURGE_STATSPACK_SNAP_STNDBY',
job_type => 'STORED_PROCEDURE',
job_action => 'purge_stats',
repeat_interval => 'FREQ=DAILY; BYHOUR=6; BYMINUTE=13',
auto_drop => FALSE,
enabled => TRUE,
comments => 'Purge Statspack snaps older than 31 days');
END;
/

Create a snapshot manually: [For reference]
[FROM Primary DB]
SQL> connect stdbyperf/perf#123
SQL> exec statspack_ORCLDR_ORCLDR1.snap --Create Snapshot for STANDBY Instance1
SQL> exec statspack_ORCLDR_ORCLDR2.snap --Create Snapshot for STANDBY Instance2

Generate a STATSPACK Report on the Standby DB:
[FROM Primary DB]
SQL> connect stdbyperf/perf#123
SQL> @?/rdbms/admin/sbreport

To Purge Snapshots manually: [For reference]
[FROM Primary DB]
SQL> connect stdbyperf/perf#123
SQL> @?/rdbms/admin/sbpurge

Delete an instance from the STATSPACK configuration:
[FROM Primary DB]
SQL> connect stdbyperf/perf#123
SQL> @?/rdbms/admin/sbdelins

Tuesday, 25 September 2018

How To Find The Unused Indexes In Oracle Database

Getting rid of unused indexes will not only free up space in the database but also boost DML performance. However, if you mistakenly drop an index that is actually used, performance may suffer. So let's go through the following steps, which let you do this task safely.

Start With The Biggest Indexes In the Database:
I usually start this job by monitoring the biggest indexes in the database, since dropping an unused big index releases a lot of space and relieves much of the DML overhead on its underlying table.
This query will show the biggest 100 indexes:

set pages 200 lines 168
col owner for a40
col SEGMENT_NAME for a45
col TABLESPACE_NAME for a35
Select * from (select OWNER,SEGMENT_NAME,SEGMENT_TYPE,TABLESPACE_NAME,BYTES/1024/1024 SIZE_MB
       from dba_segments where SEGMENT_TYPE like 'INDEX%' order by 5 desc)
where rownum <101 order by 5;


To have Oracle monitor those indexes and record whether they are used, run this command for each index:

ALTER INDEX <OWNER>.<INDEX_NAME> MONITORING USAGE;

Monitor The Biggest Indexes:
Now Let's enable monitoring on the biggest 100 indexes on the database using the output from this command:

set pages 200 lines 168
col Enable_Monitoring_Command for a100
Select * from (select 'ALTER INDEX '||OWNER||'.'||SEGMENT_NAME||' MONITORING USAGE;' Enable_Monitoring_Command, BYTES/1024/1024 SIZE_MB
       from dba_segments where SEGMENT_TYPE like 'INDEX%' order by 2 desc)
where rownum <101 order by 2 desc;

Executing the commands generated under "Enable_Monitoring_Command" will put the concerned indexes under monitoring.

Check the Monitored Indexes:
As a rule of thumb, keep those indexes under monitoring for as long as possible, to avoid dropping an index that is only used by a weekly/monthly report.
From time to time, check which monitored indexes are reported as "USED" and remove them from monitoring:

set linesize 200 pages 1000
col Index_NAME for a40
col TABLE_NAME for a40
col Monitored  for a9
col Used for a4
select io.name INDEX_NAME, t.name TABLE_NAME,
       decode(bitand(i.flags, 65536), 0, 'NO', 'YES') MONITORED,
       decode(bitand(ou.flags, 1), 0, 'NO', 'YES') USED,
       ou.start_monitoring, ou.end_monitoring
from sys.obj$ io, sys.obj$ t, sys.ind$ i, sys.object_usage ou
where i.obj# = ou.obj# and io.obj# = ou.obj# and t.obj# = i.bo#
order by 3 desc, 4;

To stop monitoring the indexes reported as "USED", run the commands generated by this query:

select 'ALTER INDEX <Index_owner>.'||io.name||' NOMONITORING USAGE;' INDEXES_IN_USE
from sys.obj$ io,sys.obj$ t,sys.ind$ i,sys.object_usage ou
where i.obj# = ou.obj# and io.obj# = ou.obj# and t.obj# = i.bo#
and decode(bitand(i.flags, 65536),0,'NO','YES')='YES' and decode(bitand(ou.flags, 1),0,'NO','YES')='YES' order by 1;

After keeping the rest of the indexes under monitoring for a sufficient time (at least one month), do the following checks:

Make sure the "UNUSED" indexes are not associated with constraints:

SELECT OWNER,TABLE_NAME,CONSTRAINT_NAME,CONSTRAINT_TYPE,INDEX_NAME FROM DBA_CONSTRAINTS
WHERE INDEX_NAME IN (select io.name from sys.obj$ io,sys.obj$ t,sys.ind$ i,sys.object_usage ou
where i.obj# = ou.obj# and io.obj# = ou.obj# and t.obj# = i.bo#
and decode(bitand(i.flags, 65536),0,'NO','YES')='YES' and decode(bitand(ou.flags, 1),0,'NO','YES')='NO');

Check that UNUSED Indexes are not appearing in Exec Plans:
Make sure the "UNUSED" indexes are not used in any recent/old SQL Statements Execution plans:

select max(TIMESTAMP),object_name from v$SQL_PLAN
where OBJECT_NAME in ('<INDEX_NAME>')
group by object_name;

select max(TIMESTAMP),object_name from DBA_HIST_SQL_PLAN
where OBJECT_NAME in ('<INDEX_NAME>')
group by object_name;
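When vetting many candidate indexes at once, the two checks above can be generated programmatically. A small illustrative helper (the function name and index names are made up, not part of any Oracle API):

```python
def exec_plan_check(index_names, view="v$sql_plan"):
    """Build the exec-plan check above for a whole list of candidate
    indexes at once. Pass view="dba_hist_sql_plan" for the AWR
    history variant shown in the second query."""
    in_list = ", ".join(f"'{name.upper()}'" for name in index_names)
    return ("select max(timestamp), object_name "
            f"from {view} "
            f"where object_name in ({in_list}) "
            "group by object_name")

print(exec_plan_check(["idx_orders_date", "idx_cust_name"]))
```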


Set the UNUSED Indexes to INVISIBLE mode for some time:
After making sure that the "UNUSED" indexes are not associated with any constraints and they are not used anymore in execution plans, you can put them into the INVISIBLE mode for some time:

 select 'ALTER INDEX <Index_owner>.'||io.name||' INVISIBLE;' SET_INVISIBLE
 from sys.obj$ io,sys.obj$ t,sys.ind$ i,sys.object_usage ou
where i.obj# = ou.obj# and io.obj# = ou.obj# and t.obj# = i.bo#
and decode(bitand(i.flags, 65536),0,'NO','YES')='YES' and decode(bitand(ou.flags, 1),0,'NO','YES')='NO';

Setting indexes to INVISIBLE makes them invisible to the optimizer, so they will not be used by any query. This is much safer than dropping the index: if you later discover that an index is still needed, you can simply set it back to VISIBLE instead of re-creating it from scratch.

Lastly, Drop the UNUSED Indexes:
Once we are definitely sure that the "UNUSED" indexes are not used, we can drop them.
This query generates a DROP statement for every unused monitored index:

select 'DROP INDEX <Index_owner>.'||io.name||' ;' DROP_INDEX
from sys.obj$ io,sys.obj$ t,sys.ind$ i,sys.object_usage ou
where i.obj# = ou.obj# and io.obj# = ou.obj# and t.obj# = i.bo#
and decode(bitand(i.flags, 65536),0,'NO','YES')='YES' and decode(bitand(ou.flags,1),0,'NO','YES')='NO';