Tuesday, June 4, 2019

EM Query




Snapshots (take snaps before & after the test): exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
AWR Report (@?/rdbms/admin/awrrpt.sql) – Workload Repository Report – top events to start with
ADDM Report (@?/rdbms/admin/addmrpt.sql) – Oracle-provided recommendations for further analysis
ASH Report (@?/rdbms/admin/ashrpt.sql) – use particularly for concurrency issues
SQL Report (@?/rdbms/admin/awrsqrpt.sql) – with the SQL_ID, get the execution plan and per-snapshot statistics
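A minimal sketch of the before/after snapshot flow (assumes a user with EXECUTE on DBMS_WORKLOAD_REPOSITORY, e.g. SYSDBA):

exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;   -- snapshot before the test
-- ... run the test workload ...
exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;   -- snapshot after the test
@?/rdbms/admin/awrrpt.sql                        -- supply the two snap IDs when prompted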

Tune SQL (SQL Tuning Advisor report for a given SQL_ID):

@$ORACLE_HOME/rdbms/admin/sqltrpt.sql
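The same advice can be generated manually with DBMS_SQLTUNE; a minimal sketch (the task name is arbitrary, &sql_id is a placeholder for the statement you want tuned, and the ADVISOR privilege is assumed):

DECLARE
  l_task VARCHAR2(128);
BEGIN
  -- create and run a tuning task for one SQL_ID
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '&sql_id', task_name => 'TUNE_ONE_SQL');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
set long 1000000
select DBMS_SQLTUNE.REPORT_TUNING_TASK('TUNE_ONE_SQL') from dual;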



set linesize 200
col BEGIN_INTERVAL_TIME format a70
select * from (select snap_id,begin_interval_time from dba_hist_snapshot order by begin_interval_time desc) where rownum < 3;
---------------------------------------------
set pages 1000
set lines 75
select a.execution_end, b.type, b.impact, d.rank, d.type,
       'Message           : '||b.message MESSAGE,
       'Command To correct: '||c.command COMMAND,
       'Action Message    : '||c.message ACTION_MESSAGE
from   dba_advisor_tasks a, dba_advisor_findings b,
       dba_advisor_actions c, dba_advisor_recommendations d
where  a.owner = b.owner and a.task_id = b.task_id
and    b.task_id = d.task_id and b.finding_id = d.finding_id
and    a.task_id = c.task_id and d.rec_id = c.rec_id
and    a.task_name like 'ADDM%' and a.status = 'COMPLETED'
and    a.description like '%4782%'
order by b.impact, d.rank;
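Alternatively, the full text of the latest completed ADDM run can be pulled with DBMS_ADVISOR; a sketch (assumes the task is visible to the current user and relies on the function's default TEXT format):

set long 1000000
select dbms_advisor.get_task_report(task_name) as addm_report
from   (select task_name
        from   dba_advisor_tasks
        where  task_name like 'ADDM%' and status = 'COMPLETED'
        order by execution_end desc)
where  rownum = 1;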

-----------------------------------------------------------

export ORACLE_SID=$1
export ORAENV_ASK=NO
. oraenv
sqlplus -S / as sysdba <<oogy
set echo off
set lines 100
set pages 200
set trimspool on
set termout off
set feedback off

column dcol new_value mydate noprint
select to_char(sysdate,'YYMMDD') dcol from dual;

spool /home/oraprd/scripts/dbreport_$1_&mydate..txt

ttitle 'Average Active Sessions in the last week: Instance $1'

column sample_hour format a16
select
   to_char(round(sub1.sample_time, 'HH24'), 'YYYY-MM-DD HH24:MI') as sample_hour
,    round(avg(sub1.on_cpu),1) as cpu_avg
,    round(avg(sub1.waiting),1) as wait_avg
,    round(avg(sub1.active_sessions),1) as act_avg
,    round( (variance(sub1.active_sessions)/avg(sub1.active_sessions)),1) as act_var_mean
from
   ( -- sub1: one row per ASH sample (DBA_HIST_ACTIVE_SESS_HISTORY keeps roughly one sample every 10 seconds)
     select
        sample_id,
        sample_time,
        sum(decode(session_state, 'ON CPU', 1, 0))  as on_cpu,
        sum(decode(session_state, 'WAITING', 1, 0)) as waiting,
        count(*) as active_sessions
     from
        dba_hist_active_sess_history
     where
        sample_time > sysdate - 7
     group by
        sample_id,
        sample_time
   ) sub1
group by
   round(sub1.sample_time, 'HH24')
order by
   round(sub1.sample_time, 'HH24');


ttitle 'Most expensive queries in the last week: Instance $1'
-- gets most expensive queries
-- (by time spent, change "order by" to use another metric)
-- after a specific date
select
   sub.sql_id,
   sub.seconds_used,
   sub.executions,
   sub.gets
from
   ( -- sub to sort before rownum
     select
        sql_id,
        round(sum(elapsed_time_delta)/1000000) as seconds_used,
        sum(executions_delta) as executions,
        sum(buffer_gets_delta) as gets
     from
        dba_hist_snapshot natural join dba_hist_sqlstat
     where
        begin_interval_time > sysdate - 7
     group by
        sql_id
     order by
        2 desc
   ) sub
where
   rownum < 30
;


set long 32768
ttitle 'Text for most expensive SQL in the last week: Instance $1'
select sql_text
from dba_hist_sqltext
where sql_id =
(
select sub.sql_id
from
   ( -- sub to sort before rownum
     select
        sql_id,
        round(sum(elapsed_time_delta)/1000000) as seconds_since_date,
        sum(executions_delta) as execs_since_date,
        sum(buffer_gets_delta) as gets_since_date
     from
        dba_hist_snapshot natural join dba_hist_sqlstat
     where
        begin_interval_time > sysdate - 7
     group by
        sql_id
     order by
        2 desc
   ) sub
where
   rownum = 1
);

spool off;
exit
oogy

Collect statistics on fixed objects using DBMS_STATS.GATHER_FIXED_OBJECTS_STATS.

# Fixed objects stats 

1. Check fixed object stats:

   select count(1) from tab_stats$;

2. Gather fixed objects stats:

   exec dbms_stats.gather_fixed_objects_stats;

3. Check fixed object stats again:

   select count(1) from tab_stats$;
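Optionally, confirm that LAST_ANALYZED moved for a fixed table (a sketch; X$KGLDP is just one example fixed object):

select owner, table_name, last_analyzed
from   dba_tab_statistics
where  table_name = 'X$KGLDP';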


# Gather system stats

execute dbms_stats.gather_system_stats('Start');
-- wait a few hours while a representative high workload runs
execute dbms_stats.gather_system_stats('Stop');

# Flush shared pool

Alter system flush shared_pool;

 Roll back if the change is not successful:


1. exec dbms_stats.delete_fixed_objects_stats();
2. exec DBMS_STATS.DELETE_SYSTEM_STATS;

---------------------------------------------------------------------------
http://www.dba-oracle.com/t_dbms_stats_gather_system_stats.htm

The dbms_stats.gather_system_stats procedure is especially useful for multi-mode Oracle shops that run OLTP during the day and DSS at night.  You invoke the dbms_stats.gather_system_stats procedure as an elapsed time capture, making sure to collect the statistics during a representative heavy workload:

execute dbms_stats.gather_system_stats('Start');
-- one hour delay during high workload
execute dbms_stats.gather_system_stats('Stop');
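Alternatively, the same capture can be done in a single call using INTERVAL mode; a sketch (the 60-minute window here is an arbitrary example):

execute dbms_stats.gather_system_stats(gathering_mode => 'INTERVAL', interval => 60);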

The data collection mechanism of the dbms_stats.gather_system_stats procedure works in a similar fashion to my script that measures I/O times to optimize the optimizer_index_cost_adj parameter. The dbms_stats.gather_system_stats output is also related to the under-documented _optimizer_cost_model parameter and your db_file_multiblock_read_count setting.
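To see the current multiblock read setting referenced above (a simple check, not part of the original note):

show parameter db_file_multiblock_read_count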

The output from dbms_stats.gather_system_stats is stored in the aux_stats$ table and you can query it as follows:

select pname, pval1 from sys.aux_stats$;

Here are the data items collected by dbms_stats.gather_system_stats:

No Workload (NW) stats:

CPUSPEEDNW - CPU speed
IOSEEKTIM - The I/O seek time in milliseconds
IOTFRSPEED - I/O transfer speed in bytes per millisecond
Workload-related stats:

SREADTIM  - Single block read time in milliseconds
MREADTIM - Multiblock read time in ms
CPUSPEED - CPU speed
MBRC - Average blocks read per multiblock read (see db_file_multiblock_read_count)
MAXTHR - Maximum I/O throughput (for OPQ only)
SLAVETHR - OPQ Factotum (slave) throughput (OPQ only)   
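To see only the workload-related values listed above (a sketch; SYSSTATS_MAIN is the section of AUX_STATS$ that holds them):

select pname, pval1 from sys.aux_stats$ where sname = 'SYSSTATS_MAIN';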
The dbms_stats.gather_system_stats procedure is very similar to my script for setting optimizer_index_cost_adj, where I compare the relative costs of sequential and scattered read times:

select 
   sum(a.time_waited_micro)/sum(a.total_waits)/1000000 c1, 
   sum(b.time_waited_micro)/sum(b.total_waits)/1000000 c2,
   (
      sum(a.total_waits) / 
      sum(a.total_waits + b.total_waits)
   ) * 100 c3,
   (
      sum(b.total_waits) / 
      sum(a.total_waits + b.total_waits)
   ) * 100 c4,
   (
      ( sum(b.time_waited_micro) / sum(b.total_waits) ) /
      ( sum(a.time_waited_micro) / sum(a.total_waits) )
   ) * 100 c5 
from 
   dba_hist_system_event a, 
   dba_hist_system_event b
where 
   a.snap_id = b.snap_id
and 
   a.event_name = 'db file scattered read'
and 
   b.event_name = 'db file sequential read';
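Here c5 expresses the average sequential-read wait as a percentage of the average scattered-read wait. As an illustration only (the value 25 below is made up, not a recommendation from the original note), applying such a figure would look like:

alter system set optimizer_index_cost_adj = 25 scope=both;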



set lines 1000 pages 50000
set markup html on
spool active.html



SELECT *
FROM   (SELECT Substr(a.sql_text,1,50) sql_text,
               Trunc(a.disk_reads/Decode(a.executions,0,1,a.executions)) reads_per_execution, 
               a.buffer_gets, 
               a.disk_reads, 
               a.executions, 
               a.sorts,
               a.address
        FROM   v$sqlarea a
        ORDER BY 2 DESC)
WHERE  rownum <= &&1;

SELECT module, sql_text, username, disk_reads_per_exec, buffer_gets, 
 disk_reads, parse_calls, sorts, executions, rows_processed, hit_ratio, 
 first_load_time, last_load_time, sharable_mem, persistent_mem, runtime_mem, cpu_time, cpu_time/1000 as cputime_per_execution, 
 elapsed_time 
 FROM 
 (SELECT module, sql_text , 
 u.username, 
 round((s.disk_reads/decode(s.executions,0,1, s.executions)),2) disk_reads_per_exec, 
 s.disk_reads, s.buffer_gets, s.parse_calls, s.sorts, s.executions , 
 s.rows_processed, 
 100 - round(100 * s.disk_reads/greatest(s.buffer_gets,1),2) hit_ratio, 
 s.first_load_time , s.last_load_time, sharable_mem , persistent_mem , runtime_mem, 
 cpu_time, elapsed_time, address, hash_value 
 FROM sys.v_$sql s, sys.all_users u 
 WHERE s.parsing_user_id=u.user_id 
 --and module = 'JDBC Thin Client' 
 and UPPER(u.username) like '%E%' 
 ORDER BY cpu_time desc, last_load_time desc) WHERE rownum <= 50; 
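If these queries are being captured into active.html by the earlier spool, close the spool and switch markup back off when finished (a small sketch):

spool off
set markup html off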

--------------------------------------------------------------------------------------


Collect statistics on fixed objects using DBMS_STATS.GATHER_FIXED_OBJECTS_STATS. 

 exec dbms_stats.gather_fixed_objects_stats;

 Start the system statistics gathering:

 EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');

 select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name='X$KGLDP';

 Then stop the gathering after 2 to 3 days:
  
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');
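To confirm when the gathering started and stopped and that it completed (a sketch; SYSSTATS_INFO is the metadata section of AUX_STATS$):

select pname, pval1, pval2 from sys.aux_stats$ where sname = 'SYSSTATS_INFO';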

set lines 180 pages 1000
col client_name for a40
col attributes for a60
select client_name, status,attributes,service_name from dba_autotask_client
/

BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/

select client_name, status,attributes,service_name from dba_autotask_client
/
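To check whether the disabled tasks actually stop running in the maintenance windows, a sketch against DBA_AUTOTASK_CLIENT_HISTORY (assumes the view is accessible to your user):

select client_name, window_name, window_start_time, jobs_created, jobs_completed
from   dba_autotask_client_history
order by window_start_time desc;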

# Enabling
BEGIN
dbms_auto_task_admin.enable(client_name => 'sql tuning advisor', operation => NULL, window_name => NULL);
END;
/

Monday, June 3, 2019

Clone RDBMS RAC binaries in 12c in silent mode - one node


1. Copy the binaries from /h01/app/oracle/db/12.1.0.2_03 to /h01/app/oracle/db/12.1.0.2_02 and point the environment at the new home:

cp -Rp /h01/app/oracle/db/12.1.0.2_03 /h01/app/oracle/db/12.1.0.2_02


export ORACLE_HOME=/h01/app/oracle/db/12.1.0.2_02
export PATH=$ORACLE_HOME/bin:$PATH

2. Run the clone command in silent mode on both DB hosts, one after the other, as the "oracle" user:

perl  /h01/app/oracle/db/12.1.0.2_02/clone/bin/clone.pl -silent ORACLE_HOME="/h01/app/oracle/db/12.1.0.2_02" ORACLE_HOME_NAME="OraDB12Home2" ORACLE_BASE="/h01/app" '-O"CLUSTER_NODES={raj}"' '-O"LOCAL_NODE=raj"'

oracle@raj (SID:DEMO) /h01/app/oracle/db/12.1.0.2_02 $ export ORACLE_HOME=/h01/app/oracle/db/12.1.0.2_02
oracle@raj (SID:DEMO) /h01/app/oracle/db/12.1.0.2_02 $ export PATH=$ORACLE_HOME/bin:$PATH
oracle@raj (SID:DEMO) /h01/app/oracle/db/12.1.0.2_02 $ perl /h01/app/oracle/db/12.1.0.2_02/clone/bin/clone.pl -silent ORACLE_HOME="/h01/app/oracle/db/12.1.0.2_02" ORACLE_HOME_NAME="OraDB12Home2" ORACLE_BASE="/h01/app" '-O"CLUSTER_NODES={raj}"' '-O"LOCAL_NODE=raj"'
./runInstaller -clone -waitForCompletion  "ORACLE_HOME=/h01/app/oracle/db/12.1.0.2_02" "ORACLE_HOME_NAME=OraDB12Home2" "ORACLE_BASE=/h01/app" "CLUSTER_NODES={raj}" "LOCAL_NODE=raj" -silent -paramFile /h01/app/oracle/db/12.1.0.2_02/clone/clone_oraparam.ini
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 4023 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 32767 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2019-06-04_02-18-54PM. Please wait ...You can find the log of this install session at:
 /c01/app/oraInventory/logs/cloneActions2019-06-04_02-18-54PM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of OraDB12Home2 was successful.
Please check '/c01/app/oraInventory/logs/cloneActions2019-06-04_02-18-54PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /h01/app/oracle/db/12.1.0.2_02/root.sh

Execute /h01/app/oracle/db/12.1.0.2_02/root.sh on the following nodes:
[raj]


..................................................   100% Done.
oracle@raj (SID:DEMO) /h01/app/oracle/db/12.1.0.2_02 $ grep $ORACLE_HOME /c01/app/oraInventory/ContentsXML/inventory.xml
<HOME NAME="OraDB12Home2" LOC="/h01/app/oracle/db/12.1.0.2_02" TYPE="O" IDX="27">


3. Run root.sh as the root user on each node:

[root@raj ~]# /h01/app/oracle/db/12.1.0.2_02/root.sh

4. You can confirm this by checking the inventory file

grep $ORACLE_HOME /c01/app/oraInventory/ContentsXML/inventory.xml

About Me

My photo
Sydney, NSW, Australia
An experienced IT professional (14+ years) worked in multiple industries such as consulting, education, financial, retail sectors. Carries good work experience in relational database design, analysis, modeling, development, administration, implementation, trouble shooting, support etc. Experienced in Oracle/SQL Server/MySQL DBA involving setup, configuration, tuning, backups, disaster recovery, high availability Oracle 11g/12C/19C RAC clusters; SQL server 2008/2012/2016 clusters, Oracle Engineered Systems such as EXADATA, ODA and Oracle and Azure Cloud. Performed Software Installations, Migrations, Database Capacity Planning, Automation of backup implementation, Cloud migration Tuning Oracle in windows/Unix platforms. In addition, experienced in UNIX administration, Shell scripting. PowerShell scripting A team player with communication skills.