There are already plenty of articles about using AI to interpret SQL execution plans and suggest performance improvements. Most cover Microsoft SQL Server; rarely do I see one for Oracle. Here I briefly report my findings on AI's interpretation of the output of Oracle's dbms_xplan.display_cursor and dbms_sqltune.report_sql_monitor, which return not just the execution plan but also runtime statistics, with granularity down to each step of the execution.
Before I fed AI sophisticated SQL execution information, I teased ChatGPT with a dumb question:
In an Oracle database, I create a table with this SQL "create table t as select object_id c from all_objects" and table T has 110,000 rows. I run the query "select * from t where c=123" and it returns the row as expected. Is there a way to optimize this query?
It answers the question as expected:
… This (i.e. my query) will trigger a full table scan because there’s no index on column c.
✅ Optimization: Add an Index
To avoid a full table scan and speed up lookups on c, create a B-tree index: …
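ChatGPT's suggestion boils down to one statement. A minimal sketch of the fix and its verification (the index name is my own placeholder, not from ChatGPT's reply):

```sql
-- Without an index on c, "select * from t where c = 123" must scan all
-- 110,000 rows. A B-tree index lets Oracle do an index range scan instead.
create index t_c_idx on t (c);

-- Refresh optimizer statistics so the CBO sees the new index:
exec dbms_stats.gather_table_stats(user, 'T', cascade => true);

-- Verify that the plan now uses the index:
explain plan for select * from t where c = 123;
select * from table(dbms_xplan.display);
```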
That’s good! Now the real thing. In our PeopleSoft Financials database, one query has been running for a few hours. I run
select dbms_sqltune.report_sql_monitor('85jbk3hgsnmfx') from dual;
to display its execution plan and stats for each step in the plan, and then ask ChatGPT this question:
The following is the output of running SQL
select dbms_sqltune.report_sql_monitor('85jbk3hgsnmfx') from dual;
in an Oracle database. Please explain how the cost has added up and optimizing which steps could improve performance the most.
followed by the output of the above query. As expected, the response explains the SQL execution step by step. Out of it, the excerpt below catches my attention (time unit should be s, not ms; not sure why ChatGPT got it wrong):
🧱 Major Contributors to Cost
🔸 Step 16 — Table Access: PS_VCHR_ACCTG_LINE
Cost = 5, Rows = 2M actual, Time = 11289 ms
Reads: 170K I/O requests for 1GB of data.
🔥 Activity: 1.01% (12 samples), mostly CPU & I/O (db file sequential read)
This is a key pain point — the huge actual row count (2M) is much higher than the optimizer’s estimate (1 row).
🔸 Step 24 — Table Access: PS_JRNL_LN
Cost = 4, Rows = 0 actual, Time = 11291 ms
Reads: 6032 requests, 47MB
🔥 Activity: 90.79% of total samples! (CPU-heavy: 10,006 samples)
This step is the biggest bottleneck. Oracle scanned 2M rows, but returned 0 rows — highly inefficient.
🔸 Step 25 — Index Range Scan: PSLJRNL_LN
Cost = 3, Rows = 4GB actual, which is inconsistent (likely reflects underlying rows or large LOBs)
🔥 Activity: 7.98% (mostly CPU)
Supporting the heavy cost in Step 24 — scanning too much data for minimal return.
In short, it has identified two bottlenecks. Step 16 is one because of its high runtime (11289 s) and the huge ratio of the actual row count to the optimizer's estimate. Step 24 is the other because of its high runtime (11291 s) and high activity percentage; almost 91% of the time is spent there. Step 25 is just the step leading up to Step 24.

This response is interesting to me because for a long time I have focused only on the step or steps with the highest activity, such as Step 24 here, but ChatGPT reminds me of another type of bottleneck: a long runtime combined with a large actual-to-estimated row count ratio. Indeed, this particular SQL already uses a fairly efficient index on Step 25 (to drive access to Step 24), which baffles me and leaves me nowhere to go for improvement. But the index on Step 17 (not shown here) leading to Step 16 is a poor one: it has 36 million rows (according to dba_indexes.num_rows) and is built on three table columns with only 6, 2, and 16 distinct values, respectively (according to dba_tab_columns.num_distinct). That fact alone makes the original decision to create this index questionable. Worse still, the output of select * from table(dbms_xplan.display_cursor('85jbk3hgsnmfx','','')) (not shown here) shows that only the first of the three columns appears in the where-clause of the SQL, further defeating the purpose of this already inefficient index. The row count estimate is grossly wrong, maybe because the table and index stats are not up to date, which warrants a separate action.
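The dictionary checks above can be reproduced with a few queries; a sketch, where the index name and stats-owner are placeholders for the actual PeopleSoft objects:

```sql
-- How selective is each column of the suspect index?
-- ('MY_INDEX' is a placeholder for the index found at Step 17)
select c.column_name, c.num_distinct
from   dba_ind_columns i
join   dba_tab_columns c
  on   c.owner       = i.table_owner
 and   c.table_name  = i.table_name
 and   c.column_name = i.column_name
where  i.index_name = 'MY_INDEX'
order  by i.column_position;

-- When were the table stats last gathered?
select table_name, num_rows, last_analyzed
from   dba_tables
where  table_name in ('PS_VCHR_ACCTG_LINE', 'PS_JRNL_LN');

-- Refresh stale stats (the separate action mentioned above);
-- 'SYSADM' is the typical PeopleSoft schema owner, adjust as needed:
exec dbms_stats.gather_table_stats('SYSADM', 'PS_VCHR_ACCTG_LINE', cascade => true);
```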
I asked Microsoft Copilot the same question. Unfortunately, the response is too generic, as if a CEO were preaching to a big audience from a high pulpit. While ChatGPT considers only a few of the extensive runtime stats in the output of dbms_sqltune.report_sql_monitor, Copilot barely uses any. I see somebody else had the same complaint, and he was testing Copilot on a SQL statement on SQL Server, not Oracle.
Conclusions
Before AI, I thought the closest thing to automatically and intelligently tuning a SQL statement was Oracle's SQL Tuning Advisor. Now we're on the brink of having a new, possibly revolutionary, tool. To achieve that, we have to somehow feed AI all the statistics gathered by the CBO (cost-based optimizer). But since AI obviously will not have an Oracle database running behind the scenes just to test and validate its answers,[note] it's unlikely that the quality of its answers, theoretically speaking, will match that of SQL Tuning Advisor, which actually test-runs the SQL you ask it to tune. But we'll see. AI is never short of amazing surprises. A Tuning Advisor with a database but with stunted intelligence may yet be overtaken by AI without a database but with fast-improving intelligence.
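For comparison, this is roughly how SQL Tuning Advisor is invoked today, using the sql_id from earlier in this article (the task name is an arbitrary label of my own):

```sql
declare
  l_task varchar2(128);
begin
  -- Create and run a tuning task for the cursor already in the shared pool
  l_task := dbms_sqltune.create_tuning_task(
              sql_id    => '85jbk3hgsnmfx',
              task_name => 'tune_85jbk3hgsnmfx');
  dbms_sqltune.execute_tuning_task(task_name => l_task);
end;
/

-- Read the advisor's findings and recommendations
select dbms_sqltune.report_tuning_task('tune_85jbk3hgsnmfx') from dual;
```

Unlike a chat with AI, this task actually executes against the live database, which is both its strength (validated advice) and its cost (runtime on a production system).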
_________________
[note] See e.g. AI cannot experiment: what a simple Linux command tells us
[December 2025 Update]
I chatted with ChatGPT about AI and Oracle databases. It spews out a long list of AI-related work done by Oracle Corporation. Then I commented: "Too bad. AI is not helping dbms_sqltune (SQL Tuning Advisor). Hope their AI effort can be focused on SQL tuning, as nobody is doing that." ChatGPT honestly acknowledges that, saying
Why Oracle hasn’t done it [AI-driven SQL tuning] yet
Based on everything Oracle has published:
1. SQL optimization is too mission-critical to automate with generative AI
If ChatGPT rewrites a SQL query incorrectly, an app goes down.
Oracle can’t risk that.
2. The optimizer is already a “black box” and Oracle is cautious
Adding an unpredictable AI layer scares vendors.
3. AI efforts are following money
Oracle is building:
AI agents for ERP
AI features for HR
AI infrastructure
AI data platforms
Because these:
Sell more cloud
Sell more GPU clusters
Sell more SaaS
SQL tuning doesn’t directly sell more cloud capacity
(it actually saves compute, which reduces revenue).
4. Very few people understand SQL tuning deeply
Oracle’s optimizer dev team is tiny and specialized.
AI engineers cannot easily jump into database internals.
⭐ The One “AI-ish” Thing Oracle DID Build: Automatic Indexing
And it is good — even ML-based — but:
It does not rewrite SQL
It only adds/drops indexes
It only works in specific Oracle editions
It doesn’t help with complex joins, analytics, or ETL SQL
It’s a step forward, but nowhere near a real AI SQL tuner.
🧠 What could Oracle AI SQL tuning look like?
Something like:
AI_TUNE_SQL(sql_id => '3y8m1fnp8w3h1');
Output:
Suggested rewrite (100% semantically equivalent)
Index recommendations
Table stats issues
Join method improvements
Vectorized or parallel opportunities
Predicted plan cost improvement
Regression-proofing
Imagine:
Oracle AI looks at your AWR history, system stats, row patterns, and table relationships and produces a tuned SQL statement with reasoning.
😔 But yes — today, nobody is doing this.
Not Oracle
Not MS SQL Server
Not PostgreSQL
Not Snowflake
Not MySQL
Database vendors are leaving SQL tuning to:
DBAs
Consultants
AWR scripts
Traditional advisors
And none of them are using real AI.
That “Imagine: Oracle AI looks at your AWR history, system stats, row patterns …” part will be a dream come true!
ChatGPT’s reasoning on why AI hasn’t entered the SQL tuning field is eye-opening: using AI for SQL tuning is risky, SQL tuning is too hard, doing it doesn’t make much profit, … Wow! I never thought about that.