Saturday, October 13, 2018

Transfer oracle sql profiles between two databases


Summary 
Here is a simple guide to transferring SQL profiles from one database to another. For clarity, assume you want to transfer a single SQL profile from the 'source' database to the 'destination' database.

1. Connect to the source database as SYSDBA and grant the required privileges to the user that will own the staging table (an example grant is shown below).


$ sqlplus "/ as sysdba"
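For reference, a minimal grant sketch; it assumes the staging table will be owned by a schema called SCOTT (a placeholder) and that ADMINISTER SQL MANAGEMENT OBJECT is the privilege covering the SQL profile staging, pack and unpack calls:

GRANT ADMINISTER SQL MANAGEMENT OBJECT TO scott;   -- assumed privilege for the DBMS_SQLTUNE profile APIs
GRANT CREATE TABLE TO scott;                       -- so the staging table can be created in the schema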

2. Create the staging table.

BEGIN
DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (table_name => 'SQL_PROFILES_STGTAB');
END;
/

3. Check the existing SQL profiles on the source database and pack the desired one into the staging table.
SELECT * FROM dba_sql_profiles ORDER BY created DESC;

BEGIN
DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (profile_name => 'SYS_SQLPROF_014c5ae7b1c80001', staging_table_name => 'SQL_PROFILES_STGTAB');
END;
/

SELECT * FROM SQL_PROFILES_STGTAB;

4. Copy the staging table SQL_PROFILES_STGTAB from the source database to the destination database (export and import); see the Data Pump sketch below.
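For example, with Data Pump (a sketch only; the schema SCOTT and the file names are placeholders for your environment, and DATA_PUMP_DIR is the default directory object):

$ expdp scott directory=DATA_PUMP_DIR dumpfile=sql_profiles_stgtab.dmp logfile=exp_stgtab.log tables=scott.SQL_PROFILES_STGTAB
# copy the dump file to a directory visible to the destination database, then:
$ impdp scott directory=DATA_PUMP_DIR dumpfile=sql_profiles_stgtab.dmp logfile=imp_stgtab.log tables=scott.SQL_PROFILES_STGTAB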
5. Grant the required privilege again on the destination database (the same grant as in step 1); a one-liner follows.
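Again a sketch, with SCOTT as a placeholder for the owning schema:

GRANT ADMINISTER SQL MANAGEMENT OBJECT TO scott;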


6. Unpack the SQL profiles from the staging table into the destination database.

BEGIN
DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(REPLACE => TRUE, staging_table_name => 'SQL_PROFILES_STGTAB');
END;
/
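To confirm the profile arrived, the same view used in step 3 can be queried on the destination:

SELECT name, status, created FROM dba_sql_profiles ORDER BY created DESC;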


SQL - force a good plan

Run the AWR report below to find the plan hash value of the good plan:

SET ECHO OFF
SET MARKUP HTML ON SPOOL ON
SPOOL planhash.html
set lines 155
col execs for 999,999,999
col avg_etime for 999,999.999
col avg_lio for 999,999,999.9
col begin_interval_time for a30
col node for 99999
break on plan_hash_value skip 1
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_lio
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id = 'gd1by1cgjj853'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
order by 1, 2, 3;
SPOOL OFF
SET MARKUP HTML OFF

SET ECHO ON

Step -1- Create SQL Tuning Set

BEGIN  
DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS_TEST',description => 'SQL Tuning Set for loading plan into SQL Plan Baseline'); 
END; 
/

Step -2- Load the STS from the AWR snapshot range

DECLARE  cur sys_refcursor; 
BEGIN  
OPEN cur FOR    
     SELECT VALUE(P)     
     FROM TABLE(dbms_sqltune.select_workload_repository(begin_snap=>12739, end_snap=>12740,basic_filter=>'sql_id =''gd1by1cgjj853''',attribute_list=>'ALL')) p;
      DBMS_SQLTUNE.LOAD_SQLSET( sqlset_name=> 'STS_TEST', populate_cursor=>cur);  
 CLOSE cur;
END;
/



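A quick sanity check that statements actually landed in the STS (STATEMENT_COUNT should be greater than zero):

SELECT name, statement_count FROM dba_sqlset WHERE name = 'STS_TEST';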


Step -3- Check the existing SQL plan baselines


SELECT * FROM dba_sql_plan_baselines ;

Step -4- Load the plan with the good plan hash value from the STS into a SQL plan baseline

DECLARE 
my_plans pls_integer; 
BEGIN  
  my_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET( 
  sqlset_name => 'STS_TEST',  
   basic_filter=>'plan_hash_value = ''494989362'''); 
END;
/
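Once loaded, the plan should appear as an accepted baseline; a focused check against the standard view:

SELECT sql_handle, plan_name, origin, enabled, accepted, fixed
FROM dba_sql_plan_baselines
ORDER BY created DESC;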


Step -5- Check the SQL Tuning Set contents


-- List out SQL Tuning Set contents to check we got what we wanted
SELECT first_load_time,
       executions AS execs,
       parsing_schema_name,
       elapsed_time / 1000000 AS elapsed_time_secs,
       cpu_time / 1000000 AS cpu_time_secs,
       buffer_gets,
       disk_reads,
       direct_writes,
       rows_processed,
       fetches,
       optimizer_cost,
       sql_plan,
       plan_hash_value,
       sql_id,
       sql_text
  FROM TABLE(DBMS_SQLTUNE.SELECT_SQLSET(sqlset_name => 'STS_TEST'));

Drop the STS once the baseline has been created:

BEGIN
  DBMS_SQLTUNE.DROP_SQLSET(
    sqlset_name => 'STS_TEST');
END;
/


   
   
To display the contents of an STS:


COLUMN SQL_TEXT FORMAT a30   
COLUMN SCH FORMAT a3
COLUMN ELAPSED FORMAT 999999999

SELECT SQL_ID, PARSING_SCHEMA_NAME AS "SCH", SQL_TEXT, 
       ELAPSED_TIME AS "ELAPSED", BUFFER_GETS
FROM   TABLE( DBMS_SQLTUNE.SELECT_SQLSET( 'STS_TEST' ) );


SELECT name, owner FROM dba_sqlset WHERE name='STS_TEST';


Once the SQL Tuning Set is loaded, you can list the SQL statements it contains with the following query:

SELECT SQL_TEXT FROM TABLE(DBMS_SQLTUNE.SELECT_SQLSET('STS_TEST'));



SQL> select SQL_TEXT from DBA_SQLSET_STATEMENTS where SQLSET_NAME='STS_TEST';

no rows selected

SQL>




select * from table(dbms_xplan.display_sqlset('STS_TEST','gd1by1cgjj853'));



SQL> select instance_name from v$instance;




SQL> BEGIN
  2  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS_TEST',description => 'SQL Tuning Set for loading plan into SQL Plan Baseline');
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> DECLARE  cur sys_refcursor;
  2  BEGIN
  3  OPEN cur FOR
  4       SELECT VALUE(P)
  5       FROM TABLE(dbms_sqltune.select_workload_repository(begin_snap=>12739, end_snap=>12740,basic_filter=>'sql_id =''gd1by1cgjj853''',attribute_list=>'ALL')) p;
  6        DBMS_SQLTUNE.LOAD_SQLSET( sqlset_name=> 'STS_TEST', populate_cursor=>cur);
  7   CLOSE cur;
  8  END;
  9  /

PL/SQL procedure successfully completed.


SQL> select name from v$database;



SQL> DECLARE
  2  my_plans pls_integer;
  3  BEGIN
  4    my_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
  5    sqlset_name => 'STS_TEST',
  6     basic_filter=>'plan_hash_value = ''494989362''');
  7  END;
  8  /

PL/SQL procedure successfully completed.

SQL>


SQL> select SQL_TEXT from DBA_SQLSET_STATEMENTS where SQLSET_NAME='STS_TEST';



FYI

Correcting Optimizer Cost Estimates to Encourage Good Execution Plans Using the COE XFR SQL Profile Script (Doc ID 1955195.1)

SQL> START coe_xfr_sql_profile.sql <sql_id> <plan_hash_value of the good plan>


Unzip sqlt.zip, navigate to the sqlt directory and then to the utl subdirectory. Find the SQL_ID and the Plan Hash Value (PHV) of the plan you want to base the profile on: the SQL_ID is in the SQLT report's "SQL Identification" section and the PHV is in its "Execution Plans" section. An example invocation is shown below.
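For example, using the SQL_ID and plan hash value from the earlier steps (assuming the script accepts them as its two positional parameters, as the MOS note describes):

SQL> START coe_xfr_sql_profile.sql gd1by1cgjj853 494989362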
