Yes, the Excel sheet you provided is an excellent starting point for identifying high-impact SQL queries. Here's a structured approach for analyzing and prioritizing them effectively:
Step 1: Key Metrics to Focus On

Primary Metric: Elapsed Time
- Why: Elapsed Time is the total duration the query took, encompassing CPU time, I/O waits, and other overheads.
- Approach: Focus on queries with the highest Elapsed Time to address the biggest bottlenecks.

Secondary Metric: Elapsed Time per Execution
- Why: A high Elapsed Time/Exec indicates that each individual execution of the query is expensive.
- Approach: Prioritize queries with high Elapsed Time/Exec, as they likely represent inefficient SQL plans.

Tertiary Metric: Buffer Gets
- Why: Buffer Gets measures logical I/O, i.e., how many database blocks the query accessed.
- Approach: High Buffer Gets often correlates with inefficient execution plans, such as full table scans or poorly filtered joins.

Additional Metrics to Consider
- Executions: a query with moderate Elapsed Time but millions of executions may still be a critical bottleneck.
- CPU Time: helps identify CPU-bound queries.
- Direct Writes / User I/O Waits: indicate heavy disk activity and potential I/O bottlenecks.
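As a rough sketch, the per-execution figures can be derived directly from the exported totals. The field names and the sample row below are illustrative assumptions, not your sheet's actual headers:

```python
# Sketch: deriving per-execution metrics from AWR-style totals exported to a
# spreadsheet. Field names and the sample row are illustrative assumptions.
rows = [
    {"sql_id": "9rttw1358gywj", "elapsed_s": 262_576_598.6,
     "executions": 339_524, "buffer_gets": 143_311_119_546},
]

for r in rows:
    # Elapsed Time/Exec in milliseconds: total elapsed seconds per execution * 1000
    r["elapsed_per_exec_ms"] = r["elapsed_s"] / r["executions"] * 1000
    # Logical I/O per execution
    r["gets_per_exec"] = r["buffer_gets"] / r["executions"]

print(f"{rows[0]['elapsed_per_exec_ms']:.1f} ms/exec")
```

Dividing totals by Executions this way is what separates a genuinely heavy statement from one that is merely called very often.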
Step 2: Prioritization

High-Impact Queries
- Sort the Excel sheet by Elapsed Time (descending) to identify the most time-consuming queries.

Balance with Frequency
- Look at Elapsed Time/Exec alongside Executions:
  - High Elapsed Time + High Executions = major bottleneck (likely application-driven inefficiencies).
  - High Elapsed Time/Exec + Low Executions = likely a poorly optimized heavy query.
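These two heuristics can be captured in a small triage helper. The thresholds below are illustrative assumptions to be tuned per workload, not fixed rules:

```python
def classify(elapsed_per_exec_ms: float, executions: int,
             busy_execs: int = 10_000, slow_ms: float = 1_000.0) -> str:
    """Rough triage of a SQL statement by cost profile.

    The cutoffs (busy_execs, slow_ms) are illustrative assumptions.
    """
    slow = elapsed_per_exec_ms >= slow_ms
    frequent = executions >= busy_execs
    if slow and frequent:
        return "major bottleneck (application-driven inefficiency likely)"
    if slow:
        return "heavy query (tune the execution plan)"
    if frequent:
        return "chatty query (reduce call frequency or cache)"
    return "low priority"

print(classify(773_363.9, 339_524))  # slow per execution AND very frequent
```

Running each row of the sheet through a function like this turns the two-axis reasoning above into a sortable label column.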
Step 3: Analyze Execution Plans

For each SQL ID identified in Step 2:

1. Retrieve the execution plan:

   SELECT *
   FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('<SQL_ID>', NULL, 'ALLSTATS LAST'));

   (Note: ALLSTATS LAST shows actual row counts only if the statement ran with the /*+ gather_plan_statistics */ hint or STATISTICS_LEVEL = ALL.)

2. Key elements to analyze:
   - Full table scans: look for TABLE ACCESS FULL.
   - Index usage: confirm whether INDEX RANGE SCAN or INDEX UNIQUE SCAN is used.
   - Joins: check join types (nested loops vs. hash joins).
   - Sorts and aggregations: identify costly operations such as SORT ORDER BY.

3. Optimization recommendations:
   - Add or adjust indexes on filter and join columns.
   - Rewrite queries for efficiency (e.g., replacing DISTINCT with analytic functions).
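When triaging many SQL IDs, the red-flag operations above can be spotted mechanically in the text that DBMS_XPLAN returns. The marker strings are standard Oracle plan operation names; the helper itself and its advice strings are illustrative:

```python
# Sketch: scan DBMS_XPLAN.DISPLAY_CURSOR text output for costly operations.
# Marker strings are standard Oracle plan operation names; the helper and
# its advice text are illustrative, not an official API.
RED_FLAGS = {
    "TABLE ACCESS FULL": "full table scan - check filter columns for indexes",
    "SORT ORDER BY": "costly sort - consider an index matching the ORDER BY",
    "MERGE JOIN CARTESIAN": "cartesian product - usually a missing join condition",
}

def find_red_flags(plan_text: str) -> list[str]:
    return [note for marker, note in RED_FLAGS.items() if marker in plan_text]

plan = """
|   1 |  SORT ORDER BY      |        |
|   2 |   TABLE ACCESS FULL | ORDERS |
"""
print(find_red_flags(plan))
```

This is only a first-pass filter; a human still needs to read the plan to judge whether, say, a full scan is actually inappropriate for that statement.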
Step 4: Iterative Improvement

1. Start with the top queries:
   - Focus on the top 5-10 queries with the highest combined impact (Elapsed Time + Buffer Gets).
2. Validate improvements:
   - Test changes (e.g., index creation, query rewrites) in a staging environment.
   - Re-run execution plans to confirm the query performance improves.
3. Monitor in production:
   - Use AWR or Active Session History (ASH) to validate post-change performance.
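The "combined impact" ranking in step 1 can be sketched as a normalized score over the two columns. The equal weighting is an assumption; adjust it to your environment:

```python
def top_impact(rows: list[dict], n: int = 10) -> list[dict]:
    """Rank statements by Elapsed Time + Buffer Gets, each normalized to [0, 1].

    Equal weighting of the two metrics is an illustrative assumption.
    """
    max_elapsed = max(r["elapsed_s"] for r in rows)
    max_gets = max(r["buffer_gets"] for r in rows)

    def score(r: dict) -> float:
        return r["elapsed_s"] / max_elapsed + r["buffer_gets"] / max_gets

    return sorted(rows, key=score, reverse=True)[:n]

# Elapsed Time and Buffer Gets figures taken from the sample table.
rows = [
    {"sql_id": "9rttw1358gywj", "elapsed_s": 262_576_598.6, "buffer_gets": 143_311_119_546},
    {"sql_id": "g4x3b5vnb5zf0", "elapsed_s": 564_938_944.5, "buffer_gets": 117_011_286_818},
    {"sql_id": "15stf55d7mpxr", "elapsed_s": 179_688_966.4, "buffer_gets": 64_088_840_690},
    {"sql_id": "2a5dnp0sr77h3", "elapsed_s": 726_832_657.8, "buffer_gets": 38_746_473_111},
]
print([r["sql_id"] for r in top_impact(rows, n=2)])
```

Normalizing first keeps one metric (Buffer Gets, which is orders of magnitude larger) from drowning out the other.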
Sample Prioritization Table

| SQL ID        | Elapsed Time (s) | Elapsed Time/Exec (ms) | Buffer Gets     | Executions | Priority | Notes                                  |
|---------------|------------------|------------------------|-----------------|------------|----------|----------------------------------------|
| 9rttw1358gywj | 262,576,598.6    | 773,363.9              | 143,311,119,546 | 339,524    | High     | Very frequent execution; needs index   |
| g4x3b5vnb5zf0 | 564,938,944.5    | 5,539,920.6            | 117,011,286,818 | 101,976    | High     | Heavy I/O; optimize table partitions   |
| 15stf55d7mpxr | 179,688,966.4    | 16,707.8               | 64,088,840,690  | 10,754,786 | Medium   | Costly joins; consider denormalization |
| 2a5dnp0sr77h3 | 726,832,657.8    | 244,683.1              | 38,746,473,111  | 2,970,506  | High     | Large table scans; partition required  |
Presentation Plan

Focus Areas
1. Current performance issues: highlight the top resource-consuming queries and their metrics.
2. Root causes: lack of indexes, high Buffer Gets, frequent executions.
3. Proposed solutions: indexes, query rewrites, table partitioning.
4. Expected outcomes: reduction in Elapsed Time and Buffer Gets; lower CPU and I/O waits.
Would you like me to help prioritize specific SQL IDs or refine the analysis
further?