Jonathan Lewis

Just another Oracle weblog

Faking Histograms

Mon, 2018-10-15 07:37

This is a short index of articles I’ve written on how to create the different types of histogram that the optimizer uses:

  • Faking a frequency histogram    How to create frequency histograms (using a numeric column for the example)
  • Histogram Tip  An example of creating a simple character-based frequency histogram (published in the IOUG Tips booklet 2014).
  • Faking a height-balanced histogram  How to create a height-balanced histogram (using a numeric column for the example).
  • Hybrid Fake: How to create a hybrid histogram (using a character column for the example).
  • Extended Histogram:  faking values into a histogram for a column group – only special because we need to derive the value stored.
  • Top frequency:  I haven’t yet worked out how to fake a Top Frequency histogram. Since it’s little more than a frequency histogram where the optimizer knows there’s a further small percentage (less than one bucketful) of other data, this doesn’t worry me; if necessary I’ll just create a “good enough” frequency histogram and set a suitable density for the remainder.

And a couple of miscellaneous things about histograms

  • Big number problem – older versions of Oracle (pre 12c) can go wrong with data values more than 15 digits long
  • Long strings problem – until 12c Oracle stored at most 32 bytes of a string in the endpoint_actual_value column.
  • Hybrid/Top-N problem – a bug, fixed in 12.2 with a patch for 12.1.
  • Upgrade threat – a step you need to take to upgrade from 11.2.0.3 if you have histograms on char() columns
  • Upgrade threat 2 (Oracle-L) – if you’ve got a big history of histograms then the upgrade from 11.2.0.3 (or earlier) could take a long time

 

Soup

Sun, 2018-10-14 13:02

When my mother-in-law comes round to Sunday lunch we often have roast chicken – and a serious error in estimating the requirement for the vegetable bed I was roasting it on led to the discovery of home-made soup. (I did warn you that my post-operative posts would be light-weight.)

Ingredients
  • Remnants of cooked chicken
    • (or 250ml, 1/3rd pint, one cup of  boiled water with a vegetable or chicken stock cube)
  • One large carrot
  • One small parsnip
  • One medium onion
    • The three vegetables should be similar in weight: roughly 200g, a bit less than 1/2 lb.
    • (Or just about any leftover vegetables from a roast dinner – I’ve even used leeks in cheese sauce)
  • Greek yoghurt (One tablespoon)
  • Choice of herbs and seasoning.

 

Method
  • Discard any large areas of fatty skin from the chicken carcase
  • Break up the carcase and place in saucepan with 500ml (2 cups, 3/4 pint) water
  • Bring to the boil and simmer for about 30 minutes with saucepan lid on.
  • Strain into a fresh saucepan, discard the remnants of chicken

 

  • Top, tail and peel the carrot, cut into discs
  • Top, tail and peel the parsnip, cut into small chunks
  • Peel and dice the onion.
  • Mix the stock and vegetables in a fresh saucepan
  • add herbs and seasoning
    • (I like a teaspoon of chopped tarragon – fresh from the garden)
  • Simmer with lid on for about 20 minutes
    • (until the carrots are softened)

 

  • Tip contents into blender
  • Blend until smooth.
  • Add a rounded tablespoon of plain Greek yoghurt, and blend
  • Serve – 2 large or 3 small portions

Despite looking a bit boring before it goes into the blender the soup tends to come out a surprisingly cheerful sunshine yellow thanks to the carrot. If you’ve managed to get the same results as I do then, because of the yoghurt I think, the texture will be almost like a thick foam or very light mousse.

I typically end up making the stock immediately after lunch is over, then keep it in the fridge for a couple of days before making the soup; that does mean you can skim off any excess fat before using the stock for the soup. And if there’s any gravy left over from lunch that’s a bonus to go in the soup.

 

dbms_log

Fri, 2018-10-12 10:39

I’ve been a long-time, though occasional, user of the undocumented dbms_system package, typically using it to write messages or insert break marks in trace files (or the alert log). Thanks to an email from Cary Millsap I’ve recently discovered that the procedures for writing to trace files have been copied to a separate dbms_log package – which is nice because some of the things in dbms_system shouldn’t be made available to general code. For example, dbms_system has a procedure kcfrms which resets a number of the “max time” columns in various dynamic performance views. It can be very useful occasionally – you might even want to call it just before or just after every AWR snapshot – but I’d rather that no-one else was able to call it even if I thought that I needed to do so.
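
If you did want to do that, a call along the following lines would do the job. This is just a minimal sketch – it assumes you have been granted execute on sys.dbms_system (and dbms_workload_repository), which is not granted by default:


begin
        sys.dbms_system.kcfrms;                         -- reset the "max time" columns
        dbms_workload_repository.create_snapshot;       -- then take a manual AWR snapshot
end;
/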

The dbms_log package is also (currently – even in 18.3) undocumented, but maybe one day soon it will appear in the PL/SQL Packages and Types reference manual. The procedures available in the package are as follows:

SQL> desc dbms_log
PROCEDURE KSDDDT
PROCEDURE KSDFLS
PROCEDURE KSDIND
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 LVL				BINARY_INTEGER		IN
PROCEDURE KSDWRT
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 DEST				BINARY_INTEGER		IN
 TST				VARCHAR2		IN
  • ksdddt moves to a new line in the trace file, writes the date and moves to the next line – but it won’t do anything if the trace file has not already been opened, so if you want a datestamp at the top of the output you actually have to start with a ksdwrt or ksdind call.
  • ksdfls flushes any pending writes to the trace file / alert log and closes the file – which isn’t relevant in my example, but would make a difference to when you see the text that has been written from inside a pl/sql block.
  • ksdind writes an “indent” of lvl colon (‘:’) symbols to the trace file. This is a one-off effect, it doesn’t set an indent for all future lines, it merely writes the ‘:’ so that the next call to ksdwrt appends its text after the colons.
  • ksdwrt writes tst to the trace file if dest = 1, to the alert log if dest = 2, and to both if dest = 3, adding a new-line at the end of the text.

Here’s a sample of code calling the procedures to write something to my trace file:


execute dbms_log.ksdddt
execute dbms_log.ksdwrt(1,'Starting')
execute dbms_log.ksdddt
execute dbms_log.ksdwrt(1,'Underlining')
execute dbms_log.ksdind(20)
execute dbms_log.ksdwrt(1,'Indented')
execute dbms_log.ksdwrt(1,'Not Indented')
execute dbms_log.ksdind(30)
execute dbms_log.ksdddt
execute dbms_log.ksdwrt(1,'Finished')
execute dbms_log.ksdind(30)
execute dbms_log.ksdfls


Here’s the text that appears in the trace files:


Starting

*** 2018-10-04T16:31:15.525515+01:00 (ORCL(3))
Underlining
::::::::::::::::::::Indented
Not Indented
::::::::::::::::::::::::::::::
*** 2018-10-04T16:31:15.532881+01:00 (ORCL(3))
Finished
::::::::::::::::::::::::::::::

Note how the call to ksdddt in line 1 of code didn’t write a date into the trace file. The call to ksdwrt in line 2 writes ‘Starting’ and moves to a new line, so we get a blank line when the call to ksdddt in line 3 moves to a new line and writes the date. At line 5 we “indent 20”, so the ksdwrt at line 6 starts after the string of colons, then moves to a new line where the indent is not repeated. We indent again at line 8, which leaves us at the end of a line, so when we call ksdddt it moves to the start of the next line and writes the date there, and we don’t get a blank line.

Footnote: when I saw Cary Millsap’s note I assumed that the procedures had been copied across in a recent version of Oracle; in fact dbms_log has been around since at least 11.2.0.4.

Footnote 2: While checking my library for references to dbms_system I came across a script I’d used to create a local version of dbms_system that allowed me to execute a call to “dbms_system.set_bool_param_in_session(‘#_SILVER_BULLET’, true)”. I used it at an IOUG conference a few years ago to demonstrate that if you set this “very hidden” parameter to true then some of your queries could run faster.  (What the call actually did was enable query rewrite and disable function-based indexes so that a special time-wasting index I’d created couldn’t get used.)

 

Hybrid Fake

Wed, 2018-10-10 09:12

Oracle 12c introduced the “Hybrid” histogram – a nice addition to the available options and one that (ignoring the bug for which a patch has been created) supplies the optimizer with better information about the data than the equivalent height-balanced histogram. There is still a problem, though, in the trade-off between accuracy and speed: just as it does with height-balanced histograms, when using auto_sample_size Oracle samples (typically) about 5,500 rows to create a hybrid histogram, and the SQL it uses to generate the necessary summary is essentially an aggregation of the sample – so either you have a small sample with the risk of lower accuracy or a large sample with an increase in workload. This being the case it’s worth knowing how to create a hybrid histogram using the dbms_stats.set_column_stats() API.

It’s fairly easy to identify the cases where a hybrid histogram could be helpful.  You have a large volume of data spread over a large number (more than 2048) of distinct values, but a few values (typically less than 250) which are responsible for a significant fraction of the data. You would like to tell Oracle about the special “extreme” cases so that the optimizer can take defensive action if you query for one of those values, but at the same time you would like to give Oracle a picture of the way the rest of the data is distributed. This is similar in some respects to the Top-N (a.k.a. Top-Frequency) histogram, which says to Oracle “We have a small number of popular values, and some odds and ends on the side that are pretty ignorable”; the critical difference is that you need the hybrid histogram when it’s not safe to “ignore” the odds and ends.

Here’s an example of creating some data and then generating a completely artificial hybrid histogram. The code demonstrates 3 points – the principal feature of creating hybrid histograms and a couple of generic details about Oracle’s histograms:

  • The main point is that Oracle 12c introduces a new numeric array in the dbms_stats.statrec structure. This allows each row (bucket) in a histogram to hold a second statistic about the bucket so we can now store a frequency figure for the bucket as a whole, and a “repeat-count” figure for the highest value in the bucket. (Warning – there is a counter-intuitive conflict between the name of the new structure and the way it is used for hybrid histograms).
  • As a side-point I’ve included a code variation that shows you the remarkable similarity between generating a Frequency histogram and a Hybrid histogram.
  • As a second side-point I have also highlighted the effect you see in the dba_tab_histograms view when your popular values are “too similar” to each other – i.e. when they match on the first 6 characters.

We start by creating a table as a copy of the view all_objects – then we’re going to create a hybrid histogram on the object_type column that looks nothing like the  data. The histogram will say:

  • for every 15,000 rows (where the column is not null)
    • 5,000 will have values less than or equal to ‘C’, of which 3,000 will have the value ‘C’
    • The next 2,000 (i.e. running total 7,000) will have values greater than ‘C’ and up to ‘PPPPPP1’, but ‘PPPPPP1’ itself is not a popular value
    • The next 2,000 (i.e. running total 9,000) will have values greater than ‘PPPPPP1’ and up to ‘PPPPPP2’, but ‘PPPPPP2’ itself is not a popular value
    • The next 2,000 (i.e. running total 11,000) will have values greater than ‘PPPPPP2’ and up to ‘PPPPPP3’, but ‘PPPPPP3’ itself is not a popular value
    • The last 4,000 (i.e. running total 15,000) will have values greater than ‘PPPPPP3’ and up to ‘X’ of which 3,000 will have the value ‘X’

Note particularly that the “how many rows hold the endpoint value” figures are stored in the statrec.bkvals array – just as they would be for a frequency histogram – and the cumulative count of rows is stored in the statrec.rpcnts structure. All we have to do to create a frequency histogram instead of a hybrid histogram is to store zeros in the statrec.rpcnts structure, or leave it uninitialized.

You’ll notice that since I’m creating a histogram on a character column I’ve used an array of type dbms_stats.chararray to hold the list of values (in ascending order) that I want the histogram to describe.


rem
rem     Script:         12c_hybrid_histogram_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2018
rem 

create table t1
as
select * from all_objects
;

begin
        dbms_stats.gather_table_stats(
                ownname         => null,
                tabname         => 't1',
                method_opt      => 'for all columns size 1'
        );
end;
/

declare
                c_array         dbms_stats.chararray;
                m_rec           dbms_stats.statrec;
                m_distcnt       number;
                m_density       number;
                m_nullcnt       number;
                m_avgclen       number;

begin
        dbms_stats.get_column_stats(
                ownname         => user,
                tabname         => 'T1',
                colname         => 'OBJECT_TYPE', 
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => m_rec,
                avgclen         => m_avgclen
        );

        m_rec.epc    := 5;              -- number of end points (buckets) in the histogram

        c_array      := dbms_stats.chararray( 'C',  'PPPPPP1',  'PPPPPP2',  'PPPPPP3',   'X');
        m_rec.bkvals := dbms_stats.numarray (3000,          1,          1,          1,  3000);  -- rows holding each endpoint value

        m_rec.rpcnts := dbms_stats.numarray (5000,       7000,       9000,      11000, 15000);  -- cumulative row count up to each endpoint
--      m_rec.rpcnts := dbms_stats.numarray (0000,       0000,       0000,       0000, 00000);  -- zeros (or uninitialized) => frequency histogram

        dbms_stats.prepare_column_values(m_rec, c_array);

        dbms_stats.set_column_stats(
                ownname         => user,
                tabname         => 'T1',
                colname         => 'OBJECT_TYPE', 
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => m_rec,
                avgclen         => m_avgclen
        ); 
end;
/

That’s it – it’s remarkably simple. To show the effect of running this code I can report the content of user_tab_histograms for the column. I’ve actually run the code and queried the results twice; first for the case where I created the hybrid histogram and then after modifying the PL/SQL block to set the rpcnts array to zeros to create a frequency histogram.


column endpoint_actual_value format a22
column endpoint_value        format 999,999,999,999,999,999,999,999,999,999,999,999

select
        endpoint_number, endpoint_value, endpoint_actual_value, endpoint_repeat_count
from
        user_tab_histograms
where
        table_name = 'T1'
and     column_name = 'OBJECT_TYPE'
order by
        endpoint_value
;

With non-zero rpcnts (hybrid histogram)
=======================================
ENDPOINT_NUMBER                                   ENDPOINT_VALUE ENDPOINT_ACTUAL_VALUE  ENDPOINT_REPEAT_COUNT
--------------- ------------------------------------------------ ---------------------- ---------------------
           3000  347,883,889,521,833,000,000,000,000,000,000,000 C                                       3000
           7000  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP1                                    1
           9000  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP2                                    1
          11000  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP3                                    1
          15000  456,922,123,551,065,000,000,000,000,000,000,000 X                                       3000


With rpcnts set to zero (frequency histogram)
=============================================
ENDPOINT_NUMBER                                   ENDPOINT_VALUE ENDPOINT_ACTUAL_VALUE  ENDPOINT_REPEAT_COUNT
--------------- ------------------------------------------------ ---------------------- ---------------------
           3000  347,883,889,521,833,000,000,000,000,000,000,000 C                                          0
           3001  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP1                                    0
           3002  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP2                                    0
           3003  417,012,704,559,973,000,000,000,000,000,000,000 PPPPPP3                                    0
           6003  456,922,123,551,065,000,000,000,000,000,000,000 X                                          0

I made a comment earlier on that the naming and use of the rpcnts structure was somewhat counter-intuitive. As you can see in the results above, when I created the hybrid histogram the values I stored in the rpcnts structure are not the values reported as the “repeat count”; the numbers reported as the “repeat count” come from the bkvals (bucket values).  As far as I’m concerned this means I have to go back to my basic examples every time I want to fake a histogram because I’m never too sure which arrays I should populate with what values – and whether I should use absolute or cumulative values.

One last minor point: you’ll see that the endpoint_actual_value has been populated in this example. This is because (with Oracle’s interesting transformation from character to numeric) the three ‘PPPPPPx’ character values turn into the same number – so Oracle stores the first 64 bytes (or 32 for versions of Oracle prior to 12c) of the actual value.
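
As an aside, you can model that conversion. The following query is only a sketch of my understanding of the transformation (it’s not an official formula): the first 15 bytes of the string are treated as a base-256 integer which is then rounded to 15 significant digits – which is why strings that agree on (roughly) their first six bytes collapse to the same endpoint_value:


select
        round(
                to_number(
                        rpad(rawtohex(utl_raw.cast_to_raw('PPPPPP1')), 30, '0'),
                        'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
                ),
                -21             -- keeps 15 significant digits for this 36-digit value
        )       approx_endpoint_value
from    dual
;


Run with ‘PPPPPP1’, ‘PPPPPP2’ or ‘PPPPPP3’ this returns the same 417,012,704,559,973,000,000,000,000,000,000,000 that appears in the endpoint_value column above.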

 

Eye, Eye, Cap’n

Wed, 2018-10-10 08:00

By the time you read this I will have had the lenses in both my eyes replaced, so I won’t be staring at a computer screen for a while – and that means, in particular, I won’t be doing any further investigation into join cardinality for a while.

For those who might otherwise feel deprived of my exquisite prose I have, however, prepared a couple of lightweight articles for automatic launching over the next few days. But please don’t expect any prompt follow-ups if you add comments or send email over the next couple of weeks.

Join Cardinality – 3

Tue, 2018-10-09 07:01

In the previous posting I listed the order of precision of histograms as:

  • Frequency
  • Top-Frequency
  • Hybrid
  • Height-balanced
  • None

Having covered the Frequency/Frequency join (for a single column, no nulls, equijoin) in the previous posting I’ve decided to work down the list and address Frequency/Top-Frequency in this posting. It gets a little harder to generate data as we move to the less precise histograms since we need to have skew, we want some gaps, and (for Top-Frequency) we need to have some data that can be “ignored”. On the plus side, though, I want to work with a small number of buckets to keep the output of any queries I run fairly short so I’m going to stick with a small number of buckets, which means the “small” volume of “ignorable” data (the “spare” bucket) can be relative large. Here’s the code I used to generate data for my investigation – 100 rows for the table with a frequency histogram and 800 rows for the table with a top-frequency.


rem
rem     Script:         freq_hist_join_05.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem     Purpose:        
rem
rem     Last tested 
rem             18.3.0.0
rem             12.2.0.1
rem

execute dbms_random.seed(0)

create table t1 (
        id              number(6),
        n04             number(6),
        n05             number(6),
        n20             number(6),
        j1              number(6)
)
;

create table t2(
        id              number(8,0),
        n20             number(6,0),
        n30             number(6,0),
        n50             number(6,0),
        j2              number(6,0)      
)
;

insert into t1
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   4) + 1                    n04,
        mod(rownum,   5) + 1                    n05,
        mod(rownum,  20) + 1                    n20,
        trunc(2.5 * trunc(sqrt(v1.id*v2.id)))   j1
from
        generator       v1,
        generator       v2
where
        v1.id <= 10 -- > comment to avoid WordPress format issue
and     v2.id <= 10 -- > comment to avoid WordPress format issue
;

insert into t2
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   20) + 1                   n20,
        mod(rownum,   30) + 1                   n30,
        mod(rownum,   50) + 1                   n50,
        28 - round(abs(7*dbms_random.normal))        j2      
from
        generator       v1
where
        rownum <= 800 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname          => null,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1 for columns j1 size 254'
        );
        dbms_stats.gather_table_stats(
                ownname          => null,
                tabname          => 'T2',
                method_opt       => 'for all columns size 1 for columns j2 size 16'
        );
end;
/

In this example I’ve used the sqrt() function and the dbms_random.normal() function to generate the data. The scaling and truncating I’ve done on the results has given me two sets of data which have a nice skew, some gaps, but different patterns (though both have a small number of small values and a larger number of larger values). The data from dbms_random.normal() will produce 22 distinct values, so I’ve requested a histogram with 16 buckets and checked that this will produce a Top-Frequency histogram. (If I want a Hybrid histogram – for the next thrilling installment in the series – I’ll just reduce the number of buckets slightly).

Here are the resulting stats, preceded by the code that reported them:


select  table_name, column_name, histogram, num_distinct, num_buckets, density
from    user_tab_cols
where   table_name in ('T1','T2')
and     column_name in ('J1','J2')
order by table_name
;

select  table_name, num_rows
from    user_tables
where   table_name in ('T1','T2')
order by table_name
;

break on table_name skip 1 on report skip 1

with f1 as (
select 
        table_name,
        endpoint_value                                                            value, 
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count,
        endpoint_number
from 
        user_tab_histograms 
where 
        table_name  = 'T1' 
and     column_name = 'J1'
order by 
        endpoint_value
),
f2 as (
select 
        table_name,
        endpoint_value                                                            value, 
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count,
        endpoint_number
from 
        user_tab_histograms 
where 
        table_name  = 'T2' 
and     column_name = 'J2'
order by 
        endpoint_value
)
select f1.* from f1
union all
select f2.* from f2
order by 1,2
;


TABLE_NAME           COLUMN_NAME          HISTOGRAM       NUM_DISTINCT NUM_BUCKETS    DENSITY
-------------------- -------------------- --------------- ------------ ----------- ----------
T1                   J1                   FREQUENCY                 10          10       .005
T2                   J2                   TOP-FREQUENCY             22          16    .000625

TABLE_NAME             NUM_ROWS
-------------------- ----------
T1                          100
T2                          800

TABLE_NAME                VALUE ROW_OR_BUCKET_COUNT ENDPOINT_NUMBER
-------------------- ---------- ------------------- ---------------
T1                            2                   5               5
                              5                  15              20
                              7                  15              35
                             10                  17              52
                             12                  13              65
                             15                  13              78
                             17                  11              89
                             20                   7              96
                             22                   3              99
                             25                   1             100

T2                            1                   1               1
                             13                  14              15
                             15                  11              26
                             16                  22              48
                             17                  34              82
                             18                  31             113
                             19                  36             149
                             20                  57             206
                             21                  44             250
                             22                  45             295
                             23                  72             367
                             24                  70             437
                             25                  87             524
                             26                 109             633
                             27                  96             729
                             28                  41             770

Table t1 reports 100 rows, 10 distinct values and a Frequency histogram with 10 buckets.
Table t2 reports 800 rows, 22 distinct values and a Top-Frequency histogram with 16 buckets.

Things we notice from the histograms are: t1 has a range from 2 to 25, while t2 has a range from 1 to 28. We also notice that the highest endpoint_number for t2 is only 770 out of a possible 800 – we’ve “lost” 30 rows. We don’t really care what they are for the purposes of the arithmetic, but if we did a quick “select j2, count(*)” query we’d see that we had lost the following:


SQL> select j2, count(*) from t2 group by j2 order by count(*), j2;

	J2   COUNT(*)
---------- ----------
	 1	    1
	 9	    1  *
	 8	    3  *
	11	    4  *
	10	    5  *
	12	    8  *
	14	    9  *
	15	   11
...

The reason why the total number of rows accounted for is less than the total number of rows in the table comes in two parts. The Top-Frequency histogram is designed to hold the Top N most popular entries in the table, so there will be some entries that don’t make an appearance in the histogram despite contributing rows to the total table count; the number of “lost” rows can then be increased because the Top N popular values may not include the column low and high values, and these two values must appear in the histogram. Looking at the output above we can see that we could have reported 14 as the 16th most popular value; instead we have to record 1, losing a further 9 rows and regaining 1.

Let’s test the pure join query on the two tables to see what the optimizer is predicting as the join cardinality, and then try to re-create that cardinality from the histogram data:


alter session set statistics_level = all;
alter session set events '10053 trace name context forever';
alter session set tracefile_identifier='BASELINE';

select
        count(*) 
from
        t1, t2
where
        t1.j1 = t2.j2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

alter session set statistics_level = typical;
alter session set events '10053 trace name context off';


-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |      41 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |      41 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |   1608 |   1327 |00:00:00.01 |      41 |  2545K|  2545K| 1355K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |    100 |    100 |00:00:00.01 |       7 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    800 |    800 |00:00:00.01 |       7 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."J1"="T2"."J2")

Our target is to work out how we can query the histogram data in a way that gets the result 1,608. Ideally we’ll also think of a rationale for justifying our method, and then we’ll apply the same method with 15 buckets and 17 buckets, and with a couple of variations to the data (e.g. update all rows where j1 = 25 to set j1 = 28), to see if the method still gets the right result.

All we did with the frequency/frequency join was to join the two histograms on matching values, multiply the frequencies on each resulting row, then sum down the set, and this automatically eliminated rows which were outside the “highest low” and “lowest high” (i.e. we only examined rows where the histograms overlapped). We might hope that things shouldn’t be too different when one of the histograms is a top-frequency histogram.

There is an important difference, though, between frequency and top-frequency histograms – in the latter case there are values in the table which will not be in the histogram, so we ought to make some allowance for these (even though it’s only “one bucket’s worth”). It’s possible that some of these values might match values in the frequency histogram so we need to include a mechanism for adding in a factor to allow for them. So as a first step let’s work out the “average number of rows per value” for the missing values.

We have 22 distinct values and 16 end points so there are 6 missing values. We have 800 rows in the table but only 770 rows reported in the histogram so there are 30 missing rows. So let’s say the missing values have an average cardinality of 30/6 = 5 (and we might extend that to say they have an average selectivity of 5/800 = 0.00625).
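
As a quick check, here’s a sketch of how those two figures can be derived straight from the data dictionary (this is just my verification, not anything the optimizer exposes – with the data above it returns 5 and .00625):


select
        (t.num_rows - h.rows_in_histogram) /
                (c.num_distinct - c.num_buckets)                        avg_rows_per_missing_value,
        ((t.num_rows - h.rows_in_histogram) /
                (c.num_distinct - c.num_buckets)) / t.num_rows          avg_missing_selectivity
from
        user_tables     t,
        user_tab_cols   c,
        (
        select  max(endpoint_number)    rows_in_histogram
        from    user_tab_histograms
        where   table_name  = 'T2'
        and     column_name = 'J2'
        )               h
where
        t.table_name  = 'T2'
and     c.table_name  = 'T2'
and     c.column_name = 'J2'
;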

Let’s bring that value into the query we wrote for the frequency/frequency case by using an outer join (which I’ll write as an “ANSI” Full Outer Join) with a predicate in place that restricts the result to just the overlapping range, which is [2,25], the “higher low value” and “lower high value” across the two histograms. Here’s some code – with an odd little detail included:


column product format 999,999,999.99
compute sum of product on report

compute sum of t1_count on report
compute sum of t1_value on report
compute sum of t2_count on report
compute sum of t2_value on report

with f1 as (
select 
        table_name,
        endpoint_value                                                            value, 
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count,
        endpoint_number
from 
        user_tab_histograms 
where 
        table_name  = 'T1' 
and     column_name = 'J1'
order by 
        endpoint_value
),
f2 as (
select 
        table_name,
        endpoint_value                                                            value, 
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count,
        endpoint_number
from 
        user_tab_histograms 
where 
        table_name  = 'T2' 
and     column_name = 'J2'
order by 
        endpoint_value
)
select
        f1.value f1_value,
        f2.value f2_value,
        nvl(f1.row_or_bucket_count,0.00) t1_count, 
        nvl(f2.row_or_bucket_count,800*0.00625) t2_count,
        nvl(f1.row_or_bucket_count,0.00) * 
        nvl(f2.row_or_bucket_count,800*0.006250) product
from
        f1
full outer join
        f2
on
        f2.value = f1.value
where
        coalesce(f1.value, f2.value) between 2 and 25
order by
        coalesce(f1.value, f2.value)
;

I’ve included an nvl() on the columns for the top-frequency histograms that convert nulls (i.e. the preserved rows derived from the frequency histogram) into the average frequency we’ve just calculated, using the “num_rows * selectivity” representation. The odd little detail that I commented on above does something similar for the preserved rows derived from the top-frequency histogram because this first guess at the calculation was wrong and needed an adjustment which I’m anticipating. Here are the results I got with this code:

  T1_VALUE   T2_VALUE   T1_COUNT   T2_COUNT         PRODUCT
---------- ---------- ---------- ---------- ---------------
         2                     5          5           25.00
         5                    15          5           75.00
         7                    15          5           75.00
        10                    17          5           85.00
        12                    13          5           65.00
                   13          0         14             .00
        15         15         13         11          143.00
                   16          0         22             .00
        17         17         11         34          374.00
                   18          0         31             .00
                   19          0         36             .00
        20         20          7         57          399.00
                   21          0         44             .00
        22         22          3         45          135.00
                   23          0         72             .00
                   24          0         70             .00
        25         25          1         87           87.00
---------- ---------- ---------- ---------- ---------------
       135        233        100        548        1,463.00

The figure is too low, so there has to be an adjustment. What if the code is allowing for the “maybe there are other values” algorithm that the optimizer uses with frequency histograms? If you’ve gathered a frequency histogram on a column but query it with a value that isn’t in the histogram then Oracle applies an algorithm that looks like: “if you’re asking for something that isn’t in the histogram I’ll assume that there must be some data there and use a frequency that’s half the lowest frequency I have recorded”**Important footnote. The value 25 appears once in our histogram so let’s include a fudge-factor of 0.5 (i.e. half a row) in the nvl() expression for the t1 frequencies and see what happens. This is what the new results look like:


  T1_VALUE   T2_VALUE   T1_COUNT   T2_COUNT         PRODUCT
---------- ---------- ---------- ---------- ---------------
         2                     5          5           25.00
         5                    15          5           75.00
         7                    15          5           75.00
        10                    17          5           85.00
        12                    13          5           65.00
                   13         .5         14            7.00
        15         15         13         11          143.00
                   16         .5         22           11.00
        17         17         11         34          374.00
                   18         .5         31           15.50
                   19         .5         36           18.00
        20         20          7         57          399.00
                   21         .5         44           22.00
        22         22          3         45          135.00
                   23         .5         72           36.00
                   24         .5         70           35.00
        25         25          1         87           87.00
---------- ---------- ---------- ---------- ---------------
       135        233      103.5        548        1,607.50

Since we were looking for 1,608 I’m going to call that a success. I can check precision, of course, by looking at the 10053 trace file. Extracting a few critical lines:

egrep -e"Density" -e"Join Card" orcl12c_ora_6520_BASELINE.trc

    AvgLen: 3 NDV: 22 Nulls: 0 Density: 0.006250 Min: 1.000000 Max: 28.000000
    AvgLen: 3 NDV: 10 Nulls: 0 Density: 0.005000 Min: 2.000000 Max: 25.000000

Join Card:  1607.500000 = outer (100.000000) * inner (800.000000) * sel (0.020094)

The “Density” lines come from the column statistics – note the 0.00625 that matches the “average selectivity” I derived from the top-frequency figures. You might also note that the “half the least frequent value” could be derived from the t1.j1 density (0.005) * t1.num_rows (100).

The “Join Card” line is exactly what it says – the join cardinality calculation showing that the plan’s prediction of 1,608 rows was actually a rounded 1607.5.

There is one more important thing to check before I start tweaking the data to see if there are any other factors involved. Is the 0.5 I stuck into the query really the value of “half the least common frequency” or is it a fixed value in all cases? A nice easy way of testing this is to update the t1 table to change one row from 22 to 25 (22 will still be present in the table and histogram before and after this test, so it’s a minimal and safe change). Making this change and re-running the calculation query leaving the 0.5 unchanged gives the following:


update t1 set j1 = 25 where j1 = 22 and rownum = 1;

...

		   21	      .5	 44	      22.00
	22	   22	       2	 45	      90.00
		   23	      .5	 72	      36.00
		   24	      .5	 70	      35.00
	25	   25	       2	 87	     174.00
		      ---------- ---------- ---------------
sum			   103.5	548	   1,649.50

Without reporting all the details:

  • the estimate in the plan went up from 1,608 to 1,794
  • leaving 0.5 in the query the derived result was 1,649.5 (last few lines of output above)
  • changing the 0.5 to 1.0 the derived result was 1,794.0

Conclusion – the “fudge factor” is consistent with the model the optimizer uses with frequency histogram calculations. The optimizer models “missing” rows in the join calculation as “half the number of the least frequently occurring value”.**Important footnote

Filter Predicates:

After a dozen tests varying the number of buckets in the top-frequency histogram (and checking it really was still a top-frequency histogram), and tweaking the t1 (frequency histogram) data to use values on the boundaries of, or outside, the range of the t2 (top-frequency) data, I concluded that my approach was probably correct. Outer join the two histograms, restrict to the overlap, supply the “num_rows * density” figure on the top-frequency side, and “half the lowest frequency”**Important footnote on the frequency side, and the query produces the same result as the optimizer for the pure join cardinality.

So the next step is to check what happens when you add filter predicates on one, or both, sides. I listed a fragment of code earlier on to execute the pure join and count the number of rows it produced, enabling the 10053 trace and pulling the actual plan from memory at the same time. I repeated this code with 3 variations and checked the “Join Card” lines from the resulting trace files:


select count(*) from  t1, t2 where  t1.j1 = t2.j2
select count(*) from  t1, t2 where  t1.j1 = t2.j2 and t1.n04 = 2
select count(*) from  t1, t2 where  t1.j1 = t2.j2                and t2.n30 = 25
select count(*) from  t1, t2 where  t1.j1 = t2.j2 and t1.n04 = 2 and t2.n30 = 25

egrep -e"Join Card" orcl12c_ora_10447*.trc

orcl12c_ora_10447_BASELINE.trc:Join Card:  1607.500000 = outer (800.000000) * inner (100.000000) * sel (0.020094)
orcl12c_ora_10447_FILTERJ1.trc:Join Card:  401.875000 = outer (800.000000) * inner (25.000000) * sel (0.020094)
orcl12c_ora_10447_FILTERJ2.trc:Join Card:  53.583333 = outer (100.000000) * inner (26.666667) * sel (0.020094)
orcl12c_ora_10447_FILTJ1J2.trc:Join Card:  13.395833 = outer (26.666667) * inner (25.000000) * sel (0.020094)

As you can see in all 4 cases, Oracle reports an inner and outer cardinality estimate and a join selectivity. The join selectivity remains unchanged throughout; it’s the value we can derive from our pure join test (0.020094 = 1607.5 / (100 * 800)). All that changes is that the individual table predicates are applied to the base tables before the join selectivity is applied to the product of the filtered base table cardinalities:

  • Column n04 has 4 distinct values in 100 rows – filter cardinality = 100/4 = 25
  • Column n30 has 30 distinct values in 800 rows – filter cardinality = 800/30 = 26.66666…
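
Multiplying those two filtered cardinalities by the join selectivity derived from the pure join reproduces the 13.395833 in the last line of the trace output. Here’s the arithmetic as a quick check (my own calculation, not optimizer output):


select
        round( (100/4) * (800/30) * (1607.5 / (100 * 800)), 6 )        filtered_join_card
from
        dual
;
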
Conclusion

For a single column equijoin on columns with no nulls where one column has a frequency histogram and the other has a top-frequency histogram the optimizer calculates the “pure” join cardinality using the overlapping range of column values and two approximating frequencies, then derives the filtered cardinality by applying the base table filters, calculates the cardinality of the cartesian join of the filtered data sets, then multiplies by the pure join selectivity.

 

 

**Important Footnote  Until Chinar Aliyev questioned what I had written, I had never noticed that the “half the lowest frequency” that I describe at various points in the arithmetic was anything other than a fixed fudge factor. In fact, in perfect symmetry with the expression used for the average selectivity in the top-frequency part of the calculation, this “fudge factor” is simply “num_rows * column_density” for the column with the frequency histogram. (Whether the “half the lowest frequency” drops out as a side effect of the density calculation, or whether the column density is derived from half the lowest frequency is another matter.)

Random Upgrade

Mon, 2018-10-08 07:36

Here’s a problem that (probably) won’t affect the day to day running of most systems – but it could be a pain in the backside for people who write programs to generate repeatable test data. I’m not going to say much about the problem, just leave you with a test script.


rem
rem	Script	random_upgrade.sql
rem	Author:	Jonathan Lewis
rem	Dated:	Oct 2018
rem
rem	Last tested
rem		18.3.0.0
rem		12.2.0.1
rem	Notes
rem	In the upgrade from 12.2.0.1 something
rem	changed that meant
rem		create as select dbms_random
rem	gets different data from
rem		select dbms_random
rem

drop table t4 purge;
drop table t3 purge;
drop table t2 purge;
drop table t1 purge;
drop table t0 purge;

set feedback off

create table t0 as
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
;


execute dbms_random.seed(0);

create table t1
as
select dbms_random.normal
from
	t0
;

execute dbms_random.seed(0);

create table t2
as
with g1 as (
	select rownum id
	from dual
	connect by
		level <= 1e4 -- > comment to avoid WordPress format issue
)
select
	dbms_random.normal
from
	g1
;

prompt	=================
prompt	Diff the two CTAS
prompt	=================

select count(*)
from (
select * from t1
minus
select * from t2
union all
select * from t2
minus
select * from t1
)
;


create table t3 
as 
select * from t2 
where rownum < 1 -- > comment to avoid WordPress format issue
;

create table t4 
as 
select * from t2 
where rownum < 1 -- > comment to avoid WordPress format issue
;

execute dbms_random.seed(0)

insert into t3
select dbms_random.normal
from
	t0
;

execute dbms_random.seed(0)

insert into t4
with g1 as (
	select rownum id
	from dual
	connect by
		level <= 1e4 -- > comment to avoid WordPress format issue
)
select
	dbms_random.normal
from
	g1
;


prompt	===================
prompt	Diff the two Insert
prompt	===================

select count(*)
from (
select * from t3
minus
select * from t4
union all
select * from t4
minus
select * from t3
)
;


prompt	===========
prompt	Sum of CTAS
prompt	===========

select sum(normal) from t1;

prompt	=============
prompt	Sum of Insert
prompt	=============

select sum(normal) from t3;


execute dbms_random.seed(0)

prompt	=============
prompt	Sum of select
prompt	=============

with g1 as (
	select rownum id
	from dual
	connect by
		level <= 1e4 -- > comment to avoid WordPress format issue
)
select sum(n) from (
select
	dbms_random.normal n
from
	g1
)
;


I’m repeatedly using dbms_random.seed(0) to reset the random number generator and trying to generate 10,000 normally distributed numbers. (I’ve chosen the normal distribution because that happened to be the function in a script I sent someone with the comment that “this will recreate the data for the demonstration” – and they wrote back to say that it didn’t.)

I’ve got two “create as select”, and two “insert as select”. One of each pair selects from a real existing table to get 10,000 rows, the other uses the “select dual connect by” trick to generate rows. I’ve written SQL that shows whether or not the two pairs of tables end up with the same data (they do, pairwise), then I’ve summed one table from each pair to see if the different mechanisms produce the same data – and that depends on the version of Oracle you’re using. Finally I’ve reset the random number generator and summed across a pure select to see what that produces.

If you run this code on 12.2.0.1 or earlier you’ll see that the “diffs” report zeros and the “sums” report -160.39249. If you upgrade to 18.3 the diffs will still report zeros and some of the sums will still report -160.39249 but the sum of the CTAS will report -91.352172.

Bottom Line

If you’ve got code that you wrote to create reproducible test cases and the code uses: “create table … as select … dbms_random …” then it won’t produce the same data when you upgrade to 18.3. You’ll have to modify the code to do “create table (); insert as select …”.
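
For what it’s worth, here’s a minimal sketch of that rewrite applied to the t0/t1 pair from the script above – create the empty table first, re-seed, then insert:


create table t1(normal number);

execute dbms_random.seed(0)

insert into t1
select  dbms_random.normal
from    t0
;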

As of this afternoon I have 1,209 test scripts on my laptop that use the dbms_random package to model data distribution patterns. It is almost certain that I will end up modifying every single one of them eventually.

There are words to express how I feel about this – but not ones that I would consider publishing.

Join Cardinality – 2

Fri, 2018-10-05 09:37

In the previous note I posted about Join Cardinality I described a method for calculating the figure that the optimizer would give for the special case where you had a query that:

  • joined two tables
  • used a single-column equijoin
  • had no nulls in the join columns
  • had a perfect frequency histogram on the columns at the two ends of the join
  • had no filter predicates associated with either table

The method simply said: “Match up rows from the two frequency histograms, multiply the corresponding frequencies” and I supplied a simple SQL statement that would read and report the two sets of histogram data, doing the arithmetic and reporting the final cardinality for you. In an update I also added an adjustment needed in 11g (or, you might say, removed in 12c) where gaps in the histograms were replaced by “ghost rows” with a frequency that was half the lowest frequency in the histogram.
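
For reference, here’s a sketch of that type of statement – not necessarily identical to the one in the earlier note – written for the t1.n1 and t2.n1 columns that appear later in this post. With the data created below it reports the 188,114,543 that the execution plan will echo:


with f1 as (
        select
                endpoint_value                                          value,
                endpoint_number - lag(endpoint_number,1,0)
                        over(order by endpoint_number)                  frequency
        from    user_tab_histograms
        where   table_name  = 'T1'
        and     column_name = 'N1'
),
f2 as (
        select
                endpoint_value                                          value,
                endpoint_number - lag(endpoint_number,1,0)
                        over(order by endpoint_number)                  frequency
        from    user_tab_histograms
        where   table_name  = 'T2'
        and     column_name = 'N1'
)
select  sum(f1.frequency * f2.frequency)        join_cardinality
from    f1, f2
where   f2.value = f1.value
;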

This is a nice place to start as the idea is very simple, and it’s likely that extensions of the basic idea will be used in all the other cases we have to consider. There are 25 possibilities that could need separate testing – though only 16 of them ought to be relevant from 12c onwards. Oracle allows for four kinds of histograms – in order of how precisely they describe the data they are:

  • Frequency – with a perfect description of the data
  • Top-N (a.k.a. Top-Frequency) – which describes all but a tiny fraction (ca. one bucket’s worth) of data perfectly
  • Hybrid – which can (but doesn’t usually, by default) describe up to 2,048 popular values perfectly and gives an approximate distribution for the rest
  • Height-balanced – which can (but doesn’t usually, by default) describe at most 1,024 popular values with some scope for misinformation.

Finally, of course, we have the general case of no histogram, using only 4 numbers (low value, high value, number of rows, number of distinct values) to give a rough picture of the data – and the need for histograms appears, of course, when the data doesn’t look anything like an even distribution of values between the low and high with close to “number of rows”/”number of distinct values” for each value.

So there are 5 possible statistical descriptions for the data in a column – which means there are 5 * 5 = 25 possible options to consider when we join two columns, or 4 * 4 = 16 if we label height-balanced histograms as obsolete and ignore them (which would be a pity because Chinar has done some very nice work explaining them).

Of course, once we’ve worked out a single-column equijoin between two tables there are plenty more options to consider:  multi-column joins, joins involving range-based predicates, joins involving more than 2 tables, and queries which (as so often happens) have predicates which aren’t involved in the joins.

For the moment I’m going to stick to the simplest case – two tables, one column, equality – and comment on the effects of filter predicates. It seems to be very straightforward, as I’ll demonstrate with a new model.

rem
rem     Script:         freq_hist_join_03.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem

execute dbms_random.seed(0)

create table t1(
        id      number(8,0),
        n0040   number(4,0),
        n0090   number(4,0),
        n0190   number(4,0),
        n0990   number(4,0),
        n1      number(4,0)
)
;

create table t2(
        id      number(8,0),
        n0050   number(4,0),
        n0110   number(4,0),
        n0230   number(4,0),
        n1150   number(4,0),
        n1      number(4,0)
)
;

insert into t1
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   40) + 1                   n0040,
        mod(rownum,   90) + 1                   n0090,
        mod(rownum,  190) + 1                   n0190,
        mod(rownum,  990) + 1                   n0990,
        trunc(30 * abs(dbms_random.normal))     n1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

insert into t2
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   50) + 1                   n0050,
        mod(rownum,  110) + 1                   n0110,
        mod(rownum,  230) + 1                   n0230,
        mod(rownum, 1150) + 1                   n1150,
        trunc(30 * abs(dbms_random.normal))     n1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns n1 size 254'
        );
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 1 for columns n1 size 254'
        );
end;
/

You’ll notice that in this script I’ve created empty tables and then populated them. This is because of an anomaly that appeared in 18.3 when I used “create as select”, and should allow the results from 18.3 to be an exact match for 12c. You don’t need to pay much attention to the Nxxx columns; they were there so I could experiment with a few variations in the selectivity of filter predicates.

Given the purpose of the demonstration I’ve gathered histograms on the column I’m going to use to join the tables (called n1 in this case), and here are the summary results:


TABLE_NAME           COLUMN_NAME          HISTOGRAM       NUM_DISTINCT NUM_BUCKETS
-------------------- -------------------- --------------- ------------ -----------
T1                   N1                   FREQUENCY                119         119
T2                   N1                   FREQUENCY                124         124

     VALUE  FREQUENCY  FREQUENCY      PRODUCT
---------- ---------- ---------- ------------
         0       2488       2619    6,516,072
         1       2693       2599    6,999,107
         2       2635       2685    7,074,975
         3       2636       2654    6,995,944
...
       113          1          3            3
       115          1          2            2
       116          4          3           12
       117          1          1            1
       120          1          2            2
                                 ------------
sum                               188,114,543

We’ve got frequency histograms, and we can see that they don’t have a perfect overlap. I haven’t printed every single line from the cardinality query, just enough to show you the extreme skew, a few gaps, and the total. So here are three queries with execution plans:


set serveroutput off

alter session set statistics_level = all;
alter session set events '10053 trace name context forever';

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
and     t1.n0990 = 20
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));


select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
and     t1.n0990 = 20
and     t2.n1150 = 25
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

I’ve queried the pure join – the count was exactly the 188,114,543 predicted by the cardinality query, of course – then I’ve applied a filter to one table, then to both tables. The first filter n0990 = 20 will (given the mod(,990) definition) identify one row in 990 from the original 100,000 in t1; the second filter n1150 = 25 will identify one row in 1150 from t2. That’s filtering down to 101 rows and 87 rows respectively from the two tables. So what do we see in the plans:


-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:23.47 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:23.47 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    188M|    188M|00:00:23.36 |     748 |  6556K|  3619K| 8839K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")



-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.02 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.02 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    190K|    200K|00:00:00.02 |     748 |  2715K|  2715K| 1647K (0)|
|*  3 |    TABLE ACCESS FULL| T1   |      1 |    101 |    101 |00:00:00.01 |     374 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")
   3 - filter("T1"."N0990"=20)



-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    165 |    165 |00:00:00.01 |     748 |  2715K|  2715K| 1678K (0)|
|*  3 |    TABLE ACCESS FULL| T2   |      1 |     87 |     87 |00:00:00.01 |     374 |       |       |          |
|*  4 |    TABLE ACCESS FULL| T1   |      1 |    101 |    101 |00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")
   3 - filter("T2"."N1150"=25)
   4 - filter("T1"."N0990"=20)


The first execution plan shows an estimate of 188M rows – but we’ll have to check the trace file to confirm whether that’s only an approximate match to our calculation, or whether it’s an exact match. So here’s the relevant pair of lines:


Join Card:  188114543.000000 = outer (100000.000000) * inner (100000.000000) * sel (0.018811)
Join Card - Rounded: 188114543 Computed: 188114543.000000

Yes, the cardinality calculation and the execution plan estimates match perfectly. But there are a couple of interesting things to note. First, Oracle seems to be deriving the cardinality by multiplying the individual cardinalities of the two tables by a figure it calls “sel” – the thing that Chinar Aliyev has labelled Jsel, the “Join Selectivity”. Secondly, Oracle can’t do arithmetic – or, removing tongue from cheek, the value it reports for the join selectivity is displayed to only 6 decimal places but stored to far greater precision. What is the Join Selectivity, though ? It’s the figure we get from the cardinality SQL divided by the cardinality of the cartesian join of the two tables – i.e. 188,114,543 / (100,000 * 100,000).
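
As a quick sanity check – this is just my arithmetic, not an extract from the original test – a one-line query reproduces the figure, giving 0.0188114543 and matching the (truncated) 0.018811 in the trace line:


select
        188114543 / (100000 * 100000)   join_selectivity
from
        dual
;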

With the clue from the first trace file, can we work out why the second and third plans show 190K and 165 rows respectively ? How about this – multiply the filtered cardinalities of the two separate tables, then multiply the result by the join selectivity:

  • 1a)   n0990 = 20: gives us 1 row in every 990.    100,000 / 990 = 101.010101…    (echoing the rounded execution plan estimate).
  • 1b)   100,000 * (100,000/990) * 0.0188114543 = 190,014.69898989…    (which is in the ballpark of the plan and needs confirmation from the trace file).

 

  • 2a)   n1150 = 25: gives us 1 row in every 1,150.    100,000 / 1,150 = 86.9565217…    (echoing the rounded execution plan estimate)
  • 2b)   (100,000/990) * (100,000/1,150) * 0.0188114543 = 165.2301651..    (echoing the rounded execution plan estimate).
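
If you want to reproduce those two figures, a quick query against dual using the full-precision join selectivity of 0.0188114543 derived above will do it. (This is just a check of the arithmetic, not part of the original test.)


select
        100000 * (100000/990)        * 0.0188114543    one_filter,
        (100000/990) * (100000/1150) * 0.0188114543    two_filters
from
        dual
;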

Cross-checking against extracts from the 10053 trace files:


Join Card:  190014.689899 = outer (101.010101) * inner (100000.000000) * sel (0.018811)
Join Card - Rounded: 190015 Computed: 190014.689899

Join Card:  165.230165 = outer (86.956522) * inner (101.010101) * sel (0.018811)
Join Card - Rounded: 165 Computed: 165.230165

Conclusion.

Remembering that we’re still looking at very simple examples with perfect frequency histograms: it looks as if we can work out a “Join Selectivity” (Jsel) – the selectivity of a “pure” unfiltered join of the two tables – by querying the histogram data, and can then calculate cardinalities for simple two-table equi-joins by multiplying together the individual (filtered) table cardinality estimates and scaling by the Join Selectivity.
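
To make that concrete, here’s a sketch of a query that derives the Join Selectivity directly from the data dictionary. It’s my own packaging of the pieces above – treat it as an untested convenience rather than part of the original test – and it assumes perfect frequency histograms on t1.n1 and t2.n1, no nulls, and current num_rows figures in user_tables; for the data in this example it should come back as the 0.0188114543 seen above:


with f1 as (
select
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
from
        user_tab_histograms
where
        table_name  = 'T1'
and     column_name = 'N1'
),
f2 as (
select
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
from
        user_tab_histograms
where
        table_name  = 'T2'
and     column_name = 'N1'
),
join_card as (
select
        sum(f1.frequency * f2.frequency)        card
from
        f1, f2
where
        f2.value = f1.value
)
select
        join_card.card / (t1s.num_rows * t2s.num_rows)  join_selectivity
from
        join_card,
        (select num_rows from user_tables where table_name = 'T1') t1s,
        (select num_rows from user_tables where table_name = 'T2') t2s
;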

Acknowledgements

Most of this work is based on a document written by Chinar Aliyev in 2016 and presented at the Hotsos Symposium the same year. I am most grateful to him for responding to a recent post of mine and getting me interested in spending some time to get re-acquainted with the topic. His original document is a 35 page pdf file, so there’s plenty more material to work through, experiment with, and write about.

 

Join Cardinality

Wed, 2018-10-03 06:01

Following up my “Hacking for Skew” article from a couple of days ago, Chinar Aliyev has written an article about a method for persuading the optimizer to calculate the correct cardinality estimate without using any undocumented, or otherwise dubious, mechanisms. His method essentially relies on the optimizer’s mechanism for estimating join cardinality when there are histograms at both ends of the join, so I thought I’d write a short note describing the simplest possible example of the calculation – an example where the query is a single-column equi-join with no nulls in either column and a perfect frequency histogram at both ends of the join.  (For a detailed description of more general cases I always refer to the work done by Alberto Dell’Era a few years ago). We start with two data sets that exhibit a strong skew in their data distributions:


execute dbms_random.seed(0)

create table t1
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        trunc(3 * abs(dbms_random.normal))      n1
from
        generator       v1
;

create table t2
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        trunc(3 * abs(dbms_random.normal))      n1
from
        generator       v1
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 254'
        );
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 254'
        );
end;
/


I’ve generated two tables of 10,000 randomly generated values using the dbms_random.normal() function, but I’ve scaled the values up by a factor of three and taken the absolute value – which has given me a range of 12 distinct integer values with a nicely skewed distribution. Then I’ve gathered stats requesting histograms of up to 254 buckets. Since I’ve tested this only on versions from 11.2.0.4 onwards, this means I’ll get a perfect frequency histogram on the n1 column of both tables.

Now I’m going to run a query that reports the values and frequencies from the two tables by querying user_tab_histograms, using a variant of an analytic query I published a long time ago to convert the cumulative frequencies recorded as the endpoint values into simple frequencies. If, for some reason, this query doesn’t run very efficiently in your tests you could always /*+ materialize */ the two factored subqueries (CTEs – common table expressions):


prompt  =======================================================================
prompt  Multiply and sum matching frequencies. An outer join is NOT needed
prompt  because rows that don't match won't contribute to the join cardinality
prompt  =======================================================================

break on report skip 1
compute sum of product on report
column product format 999,999,999

with f1 as (
select
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
from
        user_tab_histograms
where
        table_name  = 'T1'
and     column_name = 'N1'
order by
        endpoint_value
),
f2 as (
select
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
from
        user_tab_histograms
where
        table_name  = 'T2'
and     column_name = 'N1'
order by
        endpoint_value
)
select
        f1.value, f1.frequency, f2.frequency, f1.frequency * f2.frequency product
from
        f1, f2
where
        f2.value = f1.value
;


     VALUE  FREQUENCY  FREQUENCY      PRODUCT
---------- ---------- ---------- ------------
         0       2658       2532    6,730,056
         1       2341       2428    5,683,948
         2       1828       1968    3,597,504
         3       1305       1270    1,657,350
         4        856        845      723,320
         5        513        513      263,169
         6        294        249       73,206
         7        133        117       15,561
         8         40         54        2,160
         9         23         17          391
        10          5          5           25
        11          4          2            8
                                 ------------
sum                                18,746,698

As you can see, the two columns do have a highly skewed data distribution. The pattern of the two data sets is similar though the frequencies aren’t identical, of course. The total I get from this calculation is (I claim) the cardinality (rows) estimate that the optimizer will produce for doing an equi-join on these two tables – so let’s see the test:


set serveroutput off
alter session set statistics_level = all;
alter session set events '10053 trace name context forever';

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
alter session set statistics_level = typical;
alter session set events '10053 trace name context off';

And the resulting output:

Session altered.
Session altered.


  COUNT(*)
----------
  18746698


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0wxytnyqs4b5j, child number 0
-------------------------------------
select  count(*) from  t1, t2 where  t1.n1 = t2.n1

Plan hash value: 906334482
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:03.23 |      40 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:03.23 |      40 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |     18M|     18M|00:00:02.96 |      40 |  2616K|  2616K| 2098K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |  10000 |  10000 |00:00:00.01 |      20 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |  10000 |  10000 |00:00:00.01 |      20 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")

As we can see, the estimate for the hash join is “18M”, which is in the right ballpark but, in its current format, isn’t entirely helpful – which is why I’ve enabled the 10053 trace to get an exact figure from the trace file, and this is what we see:


***********************
Best so far:  Table#: 0  cost: 4.352468  card: 9487.000000  bytes: 28461.000000
              Table#: 1  cost: 378.482370  card: 18467968.000000  bytes: 110807808.000000
***********************

The optimizer’s estimate is exactly the sum of the products of the frequencies of matching values from the (frequency) histogram data. There is a simple rationale for this – it gets the right answer. For each row in t1 with value ‘X’ the (frequency) histogram on t2 tells Oracle how many rows will appear in the join, so multiplying the frequency of ‘X’ in t1 by the frequency of ‘X’ in t2 tells Oracle how many rows the ‘X’s will contribute to the join. Repeat for every distinct value that appears in both (frequency) histograms and sum the results.

As a refinement on this (very simple) example, let’s delete data from the two tables so that we have rows in t1 that won’t join to anything in t2, and vice versa – then re-gather stats, query the histograms, and check the new prediction. We want to check whether a value that appears in the t1 histogram contributes to the join cardinality estimate even if there are no matching values in the t2 histogram (and vice versa):


delete from t1 where n1 = 4;
delete from t2 where n1 = 6;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 254', no_invalidate=>false)
execute dbms_stats.gather_table_stats(user,'t2',method_opt=>'for all columns size 254', no_invalidate=>false)

with f1 as (
select
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
from
        user_tab_histograms
where
        table_name  = 'T1'
and     column_name = 'N1'
order by
        endpoint_value
),
f2 as (
select
        endpoint_value                                                            value,
        endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency
from
        user_tab_histograms
where
        table_name  = 'T2'
and     column_name = 'N1'
order by
        endpoint_value
)
select
        f1.value, f1.frequency, f2.frequency, f1.frequency * f2.frequency product
from
        f1, f2
where
        f2.value = f1.value
;


set serveroutput off
alter session set statistics_level = all;
alter session set events '10053 trace name context forever';

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
alter session set statistics_level = typical;
alter session set events '10053 trace name context off';

And the output – with a little cosmetic tidying:


856 rows deleted.
249 rows deleted.

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.


     VALUE  FREQUENCY  FREQUENCY      PRODUCT
---------- ---------- ---------- ------------
         0       2658       2532    6,730,056
         1       2341       2428    5,683,948
         2       1828       1968    3,597,504
         3       1305       1270    1,657,350
         5        513        513      263,169
         7        133        117       15,561
         8         40         54        2,160
         9         23         17          391
        10          5          5           25
        11          4          2            8
                                 ------------
sum                                17,950,172


Session altered.
Session altered.


  COUNT(*)
----------
  17950172


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0wxytnyqs4b5j, child number 0
-------------------------------------
select  count(*) from  t1, t2 where  t1.n1 = t2.n1

Plan hash value: 906334482
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:02.89 |      40 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:02.89 |      40 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |     17M|     17M|00:00:02.61 |      40 |  2616K|  2616K| 2134K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |   9144 |   9144 |00:00:00.01 |      20 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |   9751 |   9751 |00:00:00.01 |      20 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")


From the 10053 trace file:
***********************
Best so far:  Table#: 0  cost: 4.340806  card: 9144.000000  bytes: 27432.000000
              Table#: 1  cost: 368.100010  card: 17950172.000000  bytes: 107701032.000000
***********************

You can see that we’ve “lost” values 4 and 6 from the frequency histogram report; the total from the report matches the actual number of rows returned by the query, and the cardinality estimate in the plan is again in the right ballpark – with the trace file showing an exact match.

I’ve run this test on 11.2.0.4,  12.1.0.2,  12.2.0.1 and  18.3.0.0 – and there’s an anomaly that appears in 11.2.0.4 (though maybe that should be “disappeared from”): the optimizer’s estimate for the cardinality was 17,952,157 – too large by 1,985. If you read the original document by Alberto Dell’Era you’ll see that he does mention a component of the calculation that seems to be redundant and possibly shouldn’t be there. I haven’t checked the details yet, but maybe that little difference in 11.2.0.4 is exactly the component that shouldn’t be there and isn’t there any more.

Conclusion:

For an incredibly simple class of queries with perfect frequency histograms there’s a very simple way to calculate the cardinality estimate that the optimizer will predict. Match up rows from the two frequency histograms, multiply the corresponding frequencies (making sure you don’t multiply the cumulative frequencies), and sum.

This is, of course, only a tiny step in the direction of seeing how Oracle uses histograms and covers only a type of query that is probably too simple to appear in a production system, but it’s a basis on which I may build in future notes over the next few weeks.

Case Study

Sun, 2018-09-30 13:59

A question about reading execution plans and optimising queries arrived on the ODC database forum a little while ago; the owner says the following statement is taking 14 minutes to return 30,000 rows and wants some help understanding why.

If you look at the original posting you’ll see that we’ve been given the text of the query and the execution plan including rowsource execution stats. There’s an inconsistency between the supplied information and the question asked, and I’ll get back to that shortly, but to keep this note fairly short I’ve excluded the 2nd half of the query (which is a UNION ALL) because the plan says the first part of the query took 13 minutes and 20 seconds and the user is worried about a total of 14 minutes.

SELECT /*+ gather_plan_statistics*/ DISTINCT
                rct.org_id,
                hzp.party_name,
                hca.account_number,
                rct.interface_header_attribute1 order_number,
                rct.customer_trx_id,
                rct.trx_number,
                rct.trx_date,
                rctd.gl_date,
                rct.creation_date,
                rctl.line_number,
                rct.invoice_currency_code inv_currency,
                (
                       SELECT SUM (rct_1.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_1
                       WHERE  rct_1.customer_trx_id = rct.customer_trx_id
                       AND    rct_1.line_type = 'LINE') inv_net_amount,
                (
                       SELECT SUM (rct_2.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_2
                       WHERE  rct_2.customer_trx_id = rct.customer_trx_id
                       AND    rct_2.line_type = 'TAX') inv_tax_amount,
                (
                       SELECT SUM (rct_3.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_3
                       WHERE  rct_3.customer_trx_id = rct.customer_trx_id) inv_gross_amount,
                gll.currency_code                                    func_currency,
                Round((
                        (
                        SELECT SUM (rct_4.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_4
                        WHERE  rct_4.customer_trx_id = rct.customer_trx_id
                        AND    rct_4.line_type = 'LINE')*gdr.conversion_rate),2) func_net_amount,
                Round((
                        (
                        SELECT SUM (rct_5.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_5
                        WHERE  rct_5.customer_trx_id = rct.customer_trx_id
                        AND    rct_5.line_type = 'TAX')*gdr.conversion_rate),2) func_tax_amount,
                Round((
                        (
                        SELECT SUM (rct_6.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_6
                        WHERE  rct_6.customer_trx_id = rct.customer_trx_id)*gdr.conversion_rate),2) func_gross_amount,
                glcc.segment1                                                                 company,
                glcc.segment2                                                                 account,
                hg.geography_name                                                             billing_country,
                gdr.conversion_rate
FROM            apps.hz_parties hzp,
                apps.hz_cust_accounts hca,
                apps.ra_customer_trx_all rct,
                apps.ra_customer_trx_lines_all rctl,
                apps.ra_cust_trx_line_gl_dist_all rctd,
                apps.gl_code_combinations_kfv glcc,
                apps.hz_cust_site_uses_all hcsua,
                apps.hz_cust_acct_sites_all hcasa,
                apps.hz_party_sites hps,
                apps.hz_locations hl,
                apps.hz_geographies hg,
                apps.gl_ledgers gll,
                apps.gl_daily_rates gdr
WHERE           hzp.party_id = hca.party_id
AND             hca.cust_account_id = rct.bill_to_customer_id
AND             hca.cust_account_id = hcasa.cust_account_id
AND             rct.customer_trx_id = rctl.customer_trx_id
AND             rctl.customer_trx_line_id = rctd.customer_trx_line_id
AND             glcc.code_combination_id = rctd.code_combination_id
AND             rct.bill_to_site_use_id = hcsua.site_use_id
AND             hcsua.cust_acct_site_id = hcasa.cust_acct_site_id
AND             hcasa.party_site_id = hps.party_site_id
AND             hps.location_id = hl.location_id
AND             hl.country = hg.country_code
AND             hg.geography_type = 'COUNTRY'
AND             rctl.line_type = 'TAX'
AND             gll.ledger_id = rct.set_of_books_id
AND             gdr.from_currency = rct.invoice_currency_code
AND             gdr.to_currency = gll.currency_code
AND             to_date(gdr.conversion_date) = to_date(rctd.gl_date)
AND             gdr.conversion_type = 'Corporate'
AND             rctd.gl_date BETWEEN To_date ('01-JAN-2018', 'DD-MON-YYYY') AND  To_date ('31-JAN-2018', 'DD-MON-YYYY')
AND             glcc.segment1 = '2600'
AND             glcc.segment2 = '206911'
GROUP BY        hzp.party_name,
                hca.account_number,
                rct.interface_header_attribute1,
                rct.trx_number,
                rct.trx_date,
                rct.creation_date,
                rctl.line_number,
                rctl.unit_selling_price,
                rct.org_id,
                rctd.gl_date,
                rct.customer_trx_id,
                glcc.segment1,
                glcc.segment2,
                hg.geography_name,
                rct.invoice_currency_code,
                gll.currency_code,
                gdr.conversion_rate 

We note that there are six scalar subqueries in the text I’ve reported, and they form two groups of three: one group is multiplied by a conversion rate while the other isn’t, and within each group the three subqueries simply query subsets of the same correlated data set. So it looks as if all six scalar subqueries could be eliminated and replaced by a single aggregate view in the from clause, with six columns projected from that view.
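
For illustration, the aggregate view might look something like this – a sketch only, which I haven’t tested against the OP’s system; the outer query would join it to rct on customer_trx_id, project the three columns, and derive the three func_ values by multiplying by gdr.conversion_rate:


select
        customer_trx_id,
        sum(case when line_type = 'LINE' then extended_amount end)     inv_net_amount,
        sum(case when line_type = 'TAX'  then extended_amount end)     inv_tax_amount,
        sum(extended_amount)                                            inv_gross_amount
from
        apps.ra_customer_trx_lines_all
group by
        customer_trx_id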

However, before pursuing that option, take a look at the plan with the rowsource execution stats – where is the time going ?


-----------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                                                  | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT                                           |                              |      1 |        |    501 |00:13:20.17 |    3579K|  
|   1 |  UNION-ALL                                                 |                              |      1 |        |    501 |00:13:20.17 |    3579K|  
|   2 |   HASH UNIQUE                                              |                              |      1 |      1 |    501 |00:13:20.17 |    3579K|  
|   3 |    HASH GROUP BY                                           |                              |      1 |      1 |  19827 |00:13:20.15 |    3579K|  
|   4 |     NESTED LOOPS                                           |                              |      1 |        |  21808 |00:13:10.26 |    3579K|  
|   5 |      NESTED LOOPS                                          |                              |      1 |      1 |  21808 |00:13:10.11 |    3578K|  
|   6 |       NESTED LOOPS OUTER                                   |                              |      1 |      1 |  21808 |00:13:09.90 |    3576K|  
|   7 |        NESTED LOOPS OUTER                                  |                              |      1 |      1 |  21808 |00:13:09.25 |    3501K|  
|   8 |         NESTED LOOPS OUTER                                 |                              |      1 |      1 |  21808 |00:13:08.48 |    3426K|  
|   9 |          NESTED LOOPS OUTER                                |                              |      1 |      1 |  21808 |00:13:07.66 |    3333K|  
|  10 |           NESTED LOOPS OUTER                               |                              |      1 |      1 |  21808 |00:13:06.92 |    3258K|  
|  11 |            NESTED LOOPS OUTER                              |                              |      1 |      1 |  21808 |00:13:06.08 |    3183K|  
|  12 |             NESTED LOOPS                                   |                              |      1 |      1 |  21808 |00:13:04.69 |    3090K|  
|  13 |              NESTED LOOPS                                  |                              |      1 |      1 |  21808 |00:13:05.75 |    3026K|  
|  14 |               NESTED LOOPS                                 |                              |      1 |      1 |  21808 |00:13:03.30 |    2961K|  
|  15 |                NESTED LOOPS                                |                              |      1 |      1 |  33459 |00:00:04.33 |    1123K|  
|  16 |                 NESTED LOOPS                               |                              |      1 |    351 |  33459 |00:00:03.67 |    1025K|  
|  17 |                  NESTED LOOPS                              |                              |      1 |    351 |  33459 |00:00:03.06 |     926K|  
|  18 |                   NESTED LOOPS                             |                              |      1 |    351 |  33459 |00:00:02.47 |     827K|  
|* 19 |                    HASH JOIN                               |                              |      1 |    351 |  33459 |00:00:01.90 |     730K|  
|  20 |                     TABLE ACCESS FULL                      | GL_LEDGERS                   |      1 |     38 |     39 |00:00:00.01 |      15 |  
|  21 |                     NESTED LOOPS                           |                              |      1 |        |  33459 |00:00:01.75 |     730K|  
|  22 |                      NESTED LOOPS                          |                              |      1 |    351 |  33459 |00:00:01.44 |     696K|  
|  23 |                       NESTED LOOPS                         |                              |      1 |    351 |  33459 |00:00:01.11 |     646K|  
|* 24 |                        HASH JOIN                           |                              |      1 |    385 |  33459 |00:00:00.40 |     526K|  
|* 25 |                         TABLE ACCESS BY INDEX ROWID BATCHED| GL_CODE_COMBINATIONS         |      1 |     35 |      1 |00:00:00.01 |     108 |  
|* 26 |                          INDEX RANGE SCAN                  | GL_CODE_COMBINATIONS_N2      |      1 |    499 |     77 |00:00:00.01 |       3 |  
|* 27 |                         TABLE ACCESS BY INDEX ROWID BATCHED| RA_CUST_TRX_LINE_GL_DIST_ALL |      1 |    651K|   1458K|00:00:02.22 |     526K|  
|* 28 |                          INDEX RANGE SCAN                  | RA_CUST_TRX_LINE_GL_DIST_N2  |      1 |    728K|   1820K|00:00:01.60 |   11147 |  
|* 29 |                        TABLE ACCESS BY INDEX ROWID         | RA_CUSTOMER_TRX_LINES_ALL    |  33459 |      1 |  33459 |00:00:00.53 |     119K|  
|* 30 |                         INDEX UNIQUE SCAN                  | RA_CUSTOMER_TRX_LINES_U1     |  33459 |      1 |  33459 |00:00:00.31 |   86364 |  
|* 31 |                       INDEX UNIQUE SCAN                    | RA_CUSTOMER_TRX_U1           |  33459 |      1 |  33459 |00:00:00.21 |   49850 |  
|  32 |                      TABLE ACCESS BY INDEX ROWID           | RA_CUSTOMER_TRX_ALL          |  33459 |      1 |  33459 |00:00:00.20 |   33459 |  
|  33 |                    TABLE ACCESS BY INDEX ROWID             | HZ_CUST_ACCOUNTS             |  33459 |      1 |  33459 |00:00:00.42 |   97887 |  
|* 34 |                     INDEX UNIQUE SCAN                      | HZ_CUST_ACCOUNTS_U1          |  33459 |      1 |  33459 |00:00:00.24 |   64428 |  
|  35 |                   TABLE ACCESS BY INDEX ROWID              | HZ_PARTIES                   |  33459 |      1 |  33459 |00:00:00.44 |   98783 |  
|* 36 |                    INDEX UNIQUE SCAN                       | HZ_PARTIES_U1                |  33459 |      1 |  33459 |00:00:00.26 |   65175 |  
|  37 |                  TABLE ACCESS BY INDEX ROWID               | HZ_CUST_SITE_USES_ALL        |  33459 |      1 |  33459 |00:00:00.46 |   98374 |  
|* 38 |                   INDEX UNIQUE SCAN                        | HZ_CUST_SITE_USES_U1         |  33459 |      1 |  33459 |00:00:00.28 |   64915 |  
|* 39 |                 TABLE ACCESS BY INDEX ROWID                | HZ_CUST_ACCT_SITES_ALL       |  33459 |      1 |  33459 |00:00:00.45 |   98195 |  
|* 40 |                  INDEX UNIQUE SCAN                         | HZ_CUST_ACCT_SITES_U1        |  33459 |      1 |  33459 |00:00:00.26 |   64736 |  
|  41 |                TABLE ACCESS BY INDEX ROWID BATCHED         | GL_DAILY_RATES               |  33459 |      1 |  21808 |00:12:44.59 |    1838K|  
|* 42 |                 INDEX RANGE SCAN                           | GL_DAILY_RATES_U1            |  33459 |      1 |  21808 |00:13:08.16 |    1837K|  
|  43 |               TABLE ACCESS BY INDEX ROWID                  | HZ_PARTY_SITES               |  21808 |      1 |  21808 |00:00:00.35 |   64339 |  
|* 44 |                INDEX UNIQUE SCAN                           | HZ_PARTY_SITES_U1            |  21808 |      1 |  21808 |00:00:00.23 |   42531 |  
|  45 |              TABLE ACCESS BY INDEX ROWID                   | HZ_LOCATIONS                 |  21808 |      1 |  21808 |00:00:00.33 |   64353 |  
|* 46 |               INDEX UNIQUE SCAN                            | HZ_LOCATIONS_U1              |  21808 |      1 |  21808 |00:00:00.18 |   42545 |  
|  47 |             VIEW PUSHED PREDICATE                          | VW_SSQ_1                     |  21808 |      1 |  21808 |00:00:01.17 |   93476 |  
|  48 |              SORT GROUP BY                                 |                              |  21808 |      1 |  21808 |00:00:01.06 |   93476 |  
|  49 |               TABLE ACCESS BY INDEX ROWID BATCHED          | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |     16 |    145K|00:00:00.84 |   93476 |  
|* 50 |                INDEX RANGE SCAN                            | XXC_CUSTOMER_GETPAID         |  21808 |     16 |    145K|00:00:00.36 |   59938 |  
|  51 |            VIEW PUSHED PREDICATE                           | VW_SSQ_2                     |  21808 |      1 |  21808 |00:00:00.69 |   74433 |  
|  52 |             SORT GROUP BY                                  |                              |  21808 |      1 |  21808 |00:00:00.59 |   74433 |  
|  53 |              TABLE ACCESS BY INDEX ROWID BATCHED           | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  92201 |00:00:00.49 |   74433 |  
|* 54 |               INDEX RANGE SCAN                             | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  92201 |00:00:00.24 |   59903 |  
|  55 |           VIEW PUSHED PREDICATE                            | VW_SSQ_3                     |  21808 |      1 |  21808 |00:00:00.61 |   74852 |  
|  56 |            SORT GROUP BY                                   |                              |  21808 |      1 |  21808 |00:00:00.51 |   74852 |  
|  57 |             TABLE ACCESS BY INDEX ROWID BATCHED            | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  53060 |00:00:00.38 |   74852 |  
|* 58 |              INDEX RANGE SCAN                              | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  53060 |00:00:00.19 |   59148 |  
|  59 |          VIEW PUSHED PREDICATE                             | VW_SSQ_4                     |  21808 |      1 |  21808 |00:00:00.70 |   93490 |  
|  60 |           SORT GROUP BY                                    |                              |  21808 |      1 |  21808 |00:00:00.61 |   93490 |  
|  61 |            TABLE ACCESS BY INDEX ROWID BATCHED             | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |     16 |    145K|00:00:00.63 |   93490 |  
|* 62 |             INDEX RANGE SCAN                               | XXC_CUSTOMER_GETPAID         |  21808 |     16 |    145K|00:00:00.25 |   59950 |  
|  63 |         VIEW PUSHED PREDICATE                              | VW_SSQ_5                     |  21808 |      1 |  21808 |00:00:00.63 |   74427 |  
|  64 |          SORT GROUP BY                                     |                              |  21808 |      1 |  21808 |00:00:00.54 |   74427 |  
|  65 |           TABLE ACCESS BY INDEX ROWID BATCHED              | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  92201 |00:00:00.44 |   74427 |  
|* 66 |            INDEX RANGE SCAN                                | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  92201 |00:00:00.21 |   59900 |  
|  67 |        VIEW PUSHED PREDICATE                               | VW_SSQ_6                     |  21808 |      1 |  21808 |00:00:00.59 |   74846 |  
|  68 |         SORT GROUP BY                                      |                              |  21808 |      1 |  21808 |00:00:00.50 |   74846 |  
|  69 |          TABLE ACCESS BY INDEX ROWID BATCHED               | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  53060 |00:00:00.35 |   74846 |  
|* 70 |           INDEX RANGE SCAN                                 | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  53060 |00:00:00.17 |   59144 |  
|* 71 |       INDEX RANGE SCAN                                     | HZ_GEOGRAPHIES_N11           |  21808 |   5812 |  21808 |00:00:00.13 |    2684 |  
|  72 |      TABLE ACCESS BY INDEX ROWID                           | HZ_GEOGRAPHIES               |  21808 |    168 |  21808 |00:00:00.07 |     620 |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  

Let’s start by raising some concerns about the quality of information available.

First, the OP says it takes 14 minutes to return 30,000 rows – but the top line of the plan says it has taken 13 minutes and 20 seconds to return the first 501 rows, and if we look a little further down the plan, operation 3 (Hash Group By) reports 00:13:20.15 to aggregate down to 19,827 rows. So this half of the plan cannot return more than 19,827 rows, and the half I have discarded (for the moment) must be returning the other 10,000+ rows. The information we have is incomplete.

Of course you may think that whatever the rest of the plan does is fairly irrelevant – it’s only going to be responsible for at most another 40 seconds of processing – except my previous experience of rowsource execution statistics tells me that when you do a large number of small operations the times reported can be subject to fairly large rounding errors, and that enabling the measurement can increase the execution time by a factor of three or four. It’s perfectly feasible that this half of the query is actually the faster half under normal run-time circumstances but runs much more slowly (with a much higher level of CPU utilisation) when rowsource execution stats are enabled. So let’s not get too confident.

With that warning in mind, what can we see in this half of the plan ?

Big picture: the inline scalar subqueries have disappeared. In 12c the optimizer can unnest scalar subqueries in the select list and turn them into outer joins, and we can see that there are 6 “Nested Loops Outer” operations, corresponding to 6 “View Pushed Predicate” operations against views labelled VW_SSQ1 through to VW_SSQ6 (SSQ = Scalar Sub Query ?). This goes back to my earlier comment – a person could probably rewrite the 6 scalar subqueries as a single aggregate view in the from clause: the optimizer isn’t quite clever enough to manage that in this case, but in simpler cases it might be able to do exactly that.

Big picture 2: most of the 13 minutes 20 seconds appears at operation 14 as it processes the 33,459 rows supplied to it from the 4.33 seconds of work done by operation 15 and its descendants. Reducing this part of the execution plan to the smallest relevant section we get the following:

-----------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                                                  | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  
|  14 |               NESTED LOOPS                                 |                              |      1 |      1 |  21808 |00:13:03.30 |    2961K|  
|  15 |                NESTED LOOPS                                |                              |      1 |      1 |  33459 |00:00:04.33 |    1123K|  
|  41 |                TABLE ACCESS BY INDEX ROWID BATCHED         | GL_DAILY_RATES               |  33459 |      1 |  21808 |00:12:44.59 |    1838K|  
|* 42 |                 INDEX RANGE SCAN                           | GL_DAILY_RATES_U1            |  33459 |      1 |  21808 |00:13:08.16 |    1837K|  
-----------------------------------------------------------------------------------------------------------------------------------------------------  

For each row supplied by operation 15 Oracle calls operation 41, which calls operation 42 to do an index range scan to supply a set of rowids so that operation 41 can access a table and return rows. Apparently the total time spent by operation 41 waiting for operation 42 to return rowids and then doing its own work is 12 minutes 44 seconds, while the range scans alone (all 33,459 of them) take 13 minutes and 8 seconds. Remember, though, that “lots of small operations = scope of rounding errors” when you look at these timings. Despite the inconsistency between the timings for operations 41 and 42 it’s reasonable to conclude that between them that’s where most of the execution time went.

Two questions – (a) can we refine our analysis of how the time is split between the two operations, and (b) why do these lines take so much time ?

Check the Starts and the A-Rows (reminder: for comparison, we expect A-Rows to be approximately E-Rows * Starts): for both operations we see 33,459 starts and 21,808 rows. The index range scans return (on average) a single rowid about two-thirds of the time, and every time a range scan returns a rowid the corresponding row is returned from the table. (If you check the Id column there’s no asterisk on operation 41, so no extra predicate is applied as Oracle accesses the table row – but even if there were an extra predicate we’d still be happy to infer that if 21,808 rowids returned from operation 42 turned into 21,808 rows returned from the table then there are no wasted accesses to the table.)

Now look at the Buffers for the index range scan – 1.837M: that’s roughly 56 buffers per range scan – that’s a lot of index to range through to find one rowid, which is a good clue that perhaps we do a lot of work with each Start and really do use up a lot of CPU on this operation. Let’s see what the Predicate Section of the plan tells us about this range scan:


Predicate Information (identified by operation id):  
---------------------------------------------------  
  42 - access("GDR"."FROM_CURRENCY"="RCT"."INVOICE_CURRENCY_CODE" AND "GDR"."TO_CURRENCY"="GLL"."CURRENCY_CODE" AND   
              "GDR"."CONVERSION_TYPE"='Corporate')  
       filter(("GDR"."CONVERSION_TYPE"='Corporate' AND TO_DATE(INTERNAL_FUNCTION("GDR"."CONVERSION_DATE"))=TO_DATE(INTERNAL_FUNCTION("RCTD"."  
              GL_DATE"))))  

We have access predicates (things which narrow down the number of leaf blocks that we walk through) and filter predicates (things we do to test every key entry we access). Notably the gdr.conversion_type is a filter predicate as well as an access predicate – and that suggests that our set of predicates has “skipped over” a column in the index: from_currency and to_currency might be the first two columns in the index, but conversion_type is then NOT the third.

More significantly, though, there’s a column called conversion_date in the index (maybe that’s column 3 in the index – it feels like it ought to be); but for every index entry we’ve selected from the 56 blocks we walk through, we apply some sort of internal conversion (an untranslated transformation) to the column and then convert the result to a date to compare it with another date (similarly processed from an earlier operation). What is that “internal function” ? Let’s check the query:


AND             gdr.from_currency = rct.invoice_currency_code
AND             gdr.to_currency = gll.currency_code
AND             gdr.conversion_type = 'Corporate'
AND             to_date(gdr.conversion_date) = to_date(rctd.gl_date)
AND             rctd.gl_date BETWEEN To_date ('01-JAN-2018', 'DD-MON-YYYY') AND  To_date ('31-JAN-2018', 'DD-MON-YYYY')

(I’ve swapped the order of a couple of lines to highlight a detail).

The filter predicate is comparing gdr.conversion_date with rctd.gl_date – and we can probably assume that both columns really are dates because (a) the word “date” is in their names and (b) the rctd.gl_date is being compared with genuine date values in the next predicate down (and – though I haven’t shown it – the way the plan reports the next predicate proves that the column really is a date datatype).

So the predicate in the SQL applies the to_date() function to two columns that are already dates – which means Oracle has to convert each date column to some default character format and then convert it back to a date. The “internal function” is a to_char() call. Conversions between date and character formats are CPU-intensive, and we’re doing a double conversion – to_date(to_char(column_value)) – on every data value in roughly 56 blocks of an index each time we call that line of the plan. It’s not surprising we spend a lot of time in that line.
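
In other words, for every index entry examined, the filter is effectively doing something like the following (a sketch only – the character format involved is whatever the implicit conversion uses, e.g. the session’s nls_date_format):


and     to_date(to_char(gdr.conversion_date)) = to_date(to_char(rctd.gl_date))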

Initial strategy:

Check the column types for those two columns; if they are both date types, decide whether or not the predicate could be modified to a simple gdr.conversion_date = rctd.gl_date (though it’s possible that something slightly more sophisticated will be needed) – but whatever you do, avoid the redundant conversion through character format.
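
If (and only if) the check shows that conversion_date is always stored at midnight while gl_date may carry a time component, one possible rewrite – untested, and subject to the warning below – would be:


and     gdr.conversion_date = trunc(rctd.gl_date)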

Warning

Simply eliminating the to_date() calls may change the results. Here’s a demonstration of how nasty things can happen when you apply to_date() to a date:


SQL> desc t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 D1                                     DATE
 D2                                     DATE

SQL> insert into t1 values(sysdate, sysdate + 10/86400);

1 row created.

SQL> select * from t1 where d1 = d2;

no rows selected

SQL> select * from t1 where to_date(d1) = to_date(d2);

D1        D2
--------- ---------
30-SEP-18 30-SEP-18

1 row selected.

SQL> alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

Session altered.

SQL> select * from t1 where to_date(d1) = to_date(d2);

no rows selected

Different users could get different results because they have different settings for their nls_date_format.

Reminder

I started my analysis with two comments about the quality of information – first, we don’t really know whether or not this half of the union all would be responsible for most of the time if rowsource execution statistics were not enabled; secondly, a large number of small operations can lead to a lot of rounding errors in the timing. There are six occurrences of unnested scalar subqueries which are all called 21,808 times – and the recorded time for all 6 of them is remarkably small given the number of executions, even when allowing for the precision with which they operate; it’s possible that these subqueries take a larger fraction of the total time than the plan indicates, in which case it might become necessary (rather than just nice) to do a manual unnesting and reduce the number of inline views to 3 (one for each line_type), 2 (one with, one without, conversion_rate) or just one.

Footnote

Once again I’ve spent a couple of hours writing notes to explain the thoughts that went through my mind in roughly 10 minutes of reading the original posting. It’s a measure really of how many bits of information you can pull together, and possibly discard, very quickly once you understand how many things the optimizer is capable of doing and how the execution plan tries to show you how a statement was (or will be) handled.

 

Hacking for Skew

Fri, 2018-09-28 07:23

In my presentation to the UKOUG SIG yesterday “Struggling with Statistics – part 2” I described a problem that I wrote about a few months ago: when you join a fact table that has a massively skewed distribution on one of its surrogate key columns to a dimension holding the unique list of keys and descriptions, a query against a description “loses” the skew. Here’s a demo of the problem that’s a little simpler than the one in the previous article.


rem
rem     Script:         bitmap_join_histogram.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2016
rem     Updated:        Sep 2018
rem 

execute dbms_random.seed(0)

create table facts
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual 
        connect by 
                level <= 1e4 --> comment to avoid wordpress format issue
)
select
        rownum                                  id,
        trunc(3 * abs(dbms_random.normal))      id_status,
        lpad(rownum,10,'0')                     v1,
        lpad('x',100,'x')                       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 --> comment to avoid wordpress format issue
;

alter table facts add constraint fct_pk primary key(id);
alter table facts modify id_status not null;

create table statuses
as
select
        id,
        chr(65 + id)            status_code,
        rpad('x',100,'x')       description
from    (
        select
                distinct(id_status)             id
        from
                facts
        )
;

alter table statuses modify status_code not null;

alter table statuses add constraint sta_pk primary key (id);
alter table facts add constraint fct_fk_sta foreign key (id_status) references statuses(id);

create bitmap index fct_b1 on facts(id_status);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'facts',
                method_opt       => 'for all columns size skewonly'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'statuses',
                method_opt       => 'for all columns size 254'
        );
end;
/

The definition of the facts.id_status column means I get a nice skewing effect on the data and this is what my data looks like:


select id_status, count(*) from facts group by id_status order by id_status;

 ID_STATUS   COUNT(*)
---------- ----------
         0      26050
         1      23595
         2      18995
         3      13415
         4       8382
         5       4960
         6       2643
         7       1202
         8        490
         9        194
        10         55
        11         17
        12          2

13 rows selected.

The statuses table translates the numbers 0 – 12 into the letters ‘A’ – ‘M’.

A quick check will show you that there are 55 rows for id_status = 10, which means 55 rows for status_code = ‘K’. So what happens when we write the two queries that should show us these results ? I don’t really care what the execution plans are at this point; I’m interested only in the optimizer’s estimate of cardinality – so here are two queries, each followed by its execution plan:


select
        sum(fct.id)
from
        facts   fct
where
        fct.id_status = 10
;


-----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |        |     1 |     8 |    12   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE                      |        |     1 |     8 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| FACTS  |    55 |   440 |    12   (0)| 00:00:01 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |        |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE        | FCT_B1 |       |       |            |          |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("FCT"."ID_STATUS"=10)


select
        sum(fct.id)
from
        facts           fct,
        statuses        sta
where
        fct.id_status = sta.id
and     sta.status_code = 'K'
;

--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    13 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    13 |            |          |
|*  2 |   HASH JOIN         |          |  7692 | 99996 |   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     1 |     5 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='K')

The estimated cardinality for the query against the base column reflects the value 55 from the histogram, but the estimated cardinality of the join is 7,692 – which is num_rows(facts) / num_distinct(id_status). Oracle has lost information about the skew.

There is a way to get Oracle to produce a correct estimate (shown in the previous article) by rewriting the join as an IN subquery with the (undocumented) “precompute_subquery” hint, but there is an alternative which David Kurtz hypothesized in a conversation after the presentation was over (in fact someone else had described their use of exactly his suggested approach in a comment on a much older blog note about this problem): take the histogram from the id_status column on the facts table and “apply it” to the status_code column on the statuses table. In discussion with David I expressed the opinion that this probably shouldn’t work, and it wasn’t really a bit of fakery I’d want to apply to a production system – but we both tried it when we got home … with differing degrees of success.

Here’s a piece of code that I inserted into my script immediately after gathering stats on the statuses table. I’ll explain the details below as it makes a couple of assumptions that need to be pointed out:


declare

        srec                    dbms_stats.statrec;

        m_distcnt               number;
        m_density               number;
        m_nullcnt               number;
        m_avgclen               number;

        c_array                 dbms_stats.chararray;

begin

        dbms_stats.get_column_stats(
                ownname         => 'test_user',
                tabname         => 'facts',
                colname         => 'id_status',
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => srec,
                avgclen         => m_avgclen
        ); 

        srec.bkvals := dbms_stats.numarray();
        c_array     := dbms_stats.chararray();

        for r in (
                select  stt.status_code, count(*) ct
                from    facts fct, statuses stt
                where   stt.id = fct.id_status
                group by
                        stt.status_code
                order by
                        stt.status_code
        ) loop

                c_array.extend;
                c_array(c_array.count) := r.status_code;
                srec.bkvals.extend;
                srec.bkvals(srec.bkvals.count) := r.ct;

        end loop;

        dbms_stats.prepare_column_values(srec, c_array);

        dbms_stats.set_column_stats(
                ownname         => 'test_user',
                tabname         => 'statuses',
                colname         => 'status_code',
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => srec,
                avgclen         => m_avgclen
        ); 

end;
/

alter system flush shared_pool;

The code isn’t intended to be efficient, and I’ve been a bit lazy in setting up the content.

The first step gets the column stats from facts.id_status – and I know that I’ve got a frequency histogram that covers exactly the right number of distinct values on that column, so almost everything is set up correctly to copy the stats across to statuses.status_code. The one complication is that one column is numeric and the other is character, and (although I know it’s true because of the way I defined the status_code values) I need to ensure that the bucket values I write for status_code are arranged in alphabetical order of status_code.

So my second step is to run a query against the facts table to get the counts of status_code in alphabetical order and copy the results in order into a pair of arrays – one being a standalone array of the type defined in the dbms_stats package as an array of character types, the other being the array of bucket values that already exists in the stats record for the facts.id_status column that I’ve pulled into memory. (The bucket values array is stored as cumulative frequency values, so I do have to overwrite it with the simple frequency values at this point).

Finally I “prepare column values” and “set column stats” into the correct column, and the job is done. The flush of the shared pool is there to avoid any accidents of cursors surviving previous tests and causing confusion.
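As a quick sanity check – not part of the original script – you could confirm that the faked histogram has arrived on the target column with a query like this (assuming you’re connected as the owning schema):

select  endpoint_number, endpoint_actual_value
from    user_tab_histograms
where   table_name  = 'STATUSES'
and     column_name = 'STATUS_CODE'
order by
        endpoint_number
;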

So what happens when I run a couple of queries with these faked stats in place ?

set autotrace traceonly explain

select  
        sum(fct.id)
from
        facts           fct,
        statuses        sta
where
        fct.id_status = sta.id
and     sta.status_code = 'K'
;


--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    14 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    14 |            |          |
|*  2 |   HASH JOIN         |          |    55 |   770 |   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     1 |     6 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='K')



select
        sum(fct.id)
from
        facts           fct,
        statuses        sta
where
        fct.id_status = sta.id
and     sta.status_code = 'D'
;


--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    14 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    14 |            |          |
|*  2 |   HASH JOIN         |          | 13415 |   183K|   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     2 |    12 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='D')

Querying for ‘K’ the prediction is 55 rows, querying for ‘D’ the prediction is for 13,415 rows – both estimates are exactly right. Wow !!!

Problem – that’s not what David Kurtz saw. In an email to me he said: “To my surprise, if I fake a histogram on the dimension table using the skew on the join column from the fact table I do get the correct number of rows calculated in the execution plan (provided it is less than the value if the histogram was not present)”. To make that concrete – when he queried for ‘K’ he got the correct prediction, when he queried for ‘D’ he was back to a prediction of 7,692. Looking at the report of the actual data, he’d get the right prediction for codes ‘F’ to ‘M’ and the wrong prediction for codes ‘A’ to ‘E’.

So what went wrong (and with whom) ?

When I run up new tests I tend to test Oracle versions in the order 12.1.0.2, then 11.2.0.4, then 12.2.0.1, then 18.3.0.0 – it’s the order of popularity that I currently see. So I was running my test on 12.1.0.2; David was running his test on 18.3.0.0. So I jumped a step and ran my test on 12.2.0.1: here are my results when querying for status_code = ‘D’:


--------------------------------------------------------------------------------
| Id  | Operation	    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |	       |     1 |    14 |   233	 (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |	       |     1 |    14 |	    |	       |
|*  2 |   HASH JOIN	    |	       |  7692 |   105K|   233	 (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     1 |     6 |     2	 (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229	 (3)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='D')

As David has seen with 18.3, Oracle used the num_distinct to estimate the cardinality for ‘D’. (It still used the value indicated by the histogram for ‘K’.) When I set the optimizer_features_enable parameter back to 12.1.0.2 the cardinality estimate for ‘D’ went back to 13,415 – so it looks as if this is a deliberate piece of coding. 172 fix controls and 31 optimizer state parameters changed between the two versions, but none of the more likely looking candidates had any effect when I tried testing them separately; possibly there’s a new sanity check for the case where the number of rows recorded for the table is a long way off the total histogram bucket count.
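(For anyone repeating the test, the switch between the two behaviours is just the session parameter, e.g.:)

alter session set optimizer_features_enable = '12.1.0.2';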

I took a quick look at the 10053 trace in 12.2, with and without the change to optimizer_features_enable. The key difference was in the single table access path analysis – which didn’t give me any further clues.
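(If you want to generate the trace file yourself, one common way of switching the event on and off at session level is the following – run the query, or “explain plan” for it, between the two calls:)

alter session set events '10053 trace name context forever, level 1';

-- run (or explain) the query here

alter session set events '10053 trace name context off';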

With optimizer_features_enable = 12.1.0.2
=========================================
Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='K'
  Estimated selectivity: 5.5000e-04 , endpoint value predicate, col: #2

Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='D'
  Estimated selectivity: 0.134150 , endpoint value predicate, col: #2


With optimizer_features_enable defaulting to 12.2.0.1
=====================================================
Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='K'
  Estimated selectivity: 5.5000e-04 , endpoint value predicate, col: #2


Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='D'
  Estimated selectivity: 0.076923 , endpoint value predicate, col: #2


Bottom line on this – there’s at least one person who already uses this method to work around the optimizer limitation, and they need to be careful when they upgrade to 12.2 (or above) as the method no longer works in all cases.

 

 

Column Group Catalog

Thu, 2018-09-27 11:16

I seem to have written a number of articles about column groups – the rather special, and most useful, variant on extended stats. To make it as easy as possible to find the right article I’ve decided to produce a little catalogue (catalog) of all the relevant articles, with a little note about the topic each article covers. Some of the articles will link to others in the list, and there are a few items in the list from other blogs. There are also a few items which are the titles of drafts that have been hanging around for the last few years.

 

Column Stats

Wed, 2018-09-12 07:46

A little while ago I added a post-script to a note I’d written five years ago about gathering stats on a virtual column – a note I had already updated with a reference to a problem on the Oracle database forum complaining that stats collection had taken much longer after the addition of a function-based index. The problem related to the fact that the function-based index was supported by a virtual column that used an instr() function on a CLOB (XML) column – and gathering stats on the virtual column meant applying the function to every CLOB in the table.

So – my post-script, added about a month ago, suggested adding a preference (dbms_stats.set_table_prefs) to avoid gathering stats on that column. There’s a problem with this suggestion – it doesn’t work, because Oracle doesn’t play nicely when you try to limit the stats collection to a few columns – even in version 18.3. Here’s a demonstration of the effect; first we create a table that includes a column group (extended stats), a virtual column, and a function-based index – i.e. the three different ways of generating user-related virtual columns.


rem
rem     Script:         stats_struggle_06.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2018
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad(rownum,10,'0')             v2
from
        generator       v1
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

execute dbms_stats.delete_table_stats(user,'t1')

begin
        dbms_output.put_line(
                dbms_stats.create_extended_stats(
                        ownname         => user,
                        tabname         => 'T1',
                        extension       => '(v1, v2)'
                )
        );
end;
/

alter table t1 add id_12 
        generated always as (mod(id,12)) virtual
;

create index t1_id on t1(mod(id,10));


Since I’ve run this on 12c and 18c I’ve included a call to delete table stats after creating the table. So the next step is to enable SQL trace and see what Oracle does under the covers when we gather stats on just a couple of columns in the table:


alter session set events '10046 trace name context forever';

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 't1',
                method_opt  => 'for columns size 1 id v1',
                cascade     => false
        );
end;
/

alter session set events '10046 trace name context off';

column column_name  format a32
column data_default format a32

select 
        column_name, data_default,
        num_nulls, num_distinct, to_char(last_analyzed,'hh24:mi:ss') gathered
from    user_tab_cols 
where   table_name = 'T1' 
order by 
        internal_column_id
;

COLUMN_NAME                      DATA_DEFAULT                      NUM_NULLS NUM_DISTINCT GATHERED
-------------------------------- -------------------------------- ---------- ------------ --------
ID                                                                         0        10000 16:13:12
V1                                                                         0        10000 16:13:12
V2
SYS_STUIBQVZ_50PU9_NIQ6_G6_2Y7   SYS_OP_COMBINED_HASH("V1","V2")
ID_12                            MOD("ID",12)
SYS_NC00006$                     MOD("ID",10)

According to the output of the last query we’ve gathered stats only on the two columns specified. But have we really avoided the work ? Here, with some cosmetic tidying, is the SQL executed by the package:

select 
        /*+
                full(t) no_parallel(t) no_parallel_index(t) dbms_stats
                cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
                xmlindex_sel_idx_tbl no_substrb_pad 
         */
        to_char(count(ID)),
        substrb(dump(min(ID),16,0,64),1,240),
        substrb(dump(max(ID),16,0,64),1,240),
        to_char(count(V1)),
        substrb(dump(min(V1),16,0,64),1,240),
        substrb(dump(max(V1),16,0,64),1,240),
        to_char(count(V2)),
        to_char(count(SYS_STUIBQVZ_50PU9_NIQ6_G6_2Y7)),
        to_char(count(ID_12)),
        to_char(count(SYS_NC00006$))
from
        TEST_USER.T1 t  /* NDV,NIL,NIL,NDV,NIL,NIL,ACL,ACL,ACL,ACL*/

We can see that Oracle has done a count(), min() and max() on ID and V1, and the “comment” at the end of the text tells us that it’s applied the approximate_ndv mechanism to the first two columns queried but not the rest. However it has count()ed all the other columns – which means it’s evaluated their underlying expressions. So if you were hoping that limiting the columns gathered would avoid a really expensive function call, bad luck.

Threat / Bug alert

One odd little detail that came up when I ran a test case that used a deterministic function to generate a virtual column: in 12.1.0.2 the function was called once per row (possibly because every row had a different value) whether or not it was in the list of columns for gathering stats; in 18.3 the function was called nearly twice per row when I didn’t specify stats gathering for the column and nearly 4 times per row when I did. This looks like it might be a change (possibly accidental) to how deterministic functions can cache their inputs and outputs – possibly something as “minor” as the size of the cache.
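For anyone who wants to repeat the call-count test the shape of the code is roughly as follows – the package counter and all the names are invented for illustration, and (since Oracle doesn’t enforce determinism) the “deterministic” function can quietly keep count of how often it is called:

create or replace package pkg_counter as
        g_calls number := 0;
end;
/

create or replace function f_det(i_in number) return number
deterministic
as
begin
        pkg_counter.g_calls := pkg_counter.g_calls + 1;   -- count every call
        return mod(i_in, 12);
end;
/

alter table t1 add id_det generated always as (f_det(id)) virtual;

set serveroutput on
execute pkg_counter.g_calls := 0
execute dbms_stats.gather_table_stats(user, 't1', method_opt => 'for columns size 1 id v1', cascade => false)
execute dbms_output.put_line('f_det() calls: ' || pkg_counter.g_calls)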

 

Stats time

Mon, 2018-09-10 07:37

I wrote a note a couple of years ago explaining how I used to get a rough idea (with some errors) of how much time was spent in the overnight stats collection by each object. One of the nice little enhancements that appeared in 12c was the appearance of a couple of functions that can report information about this type of thing, and more. These are the dbms_stats functions report_stats_operations() and report_single_stats_operation(), with the following definitions:


function report_stats_operations(
        detail_level  varchar2                  default 'TYPICAL',
        format        varchar2                  default 'TEXT', 
        latestN       number                    default null,
        since         timestamp with time zone  default null,
        until         timestamp with time zone  default null,
        auto_only     boolean                   default false,
        container_ids dbms_utility.number_array default dbms_stats.NULL_NUMTAB
) return clob;

function report_single_stats_operation(
        opid         number,
        detail_level varchar2 default 'TYPICAL', 
        format       varchar2 default 'TEXT', 
        container_id number   default null
) return clob;

As you can see, there are lots of options to generating the report of stats operations, and you can check the manuals or $ORACLE_HOME/rdbms/admin/dbmsstat.sql for information about how you can use it. One of the simplest options would be to run from SQL*Plus:

set long 1000000

set pagesize    0
set linesize  255
set trimspool on

column text_line format a254

select
        dbms_stats.report_stats_operations(
                since => sysdate - 3
        ) text_line
from dual
;

Of course you wouldn’t be able to pick the option that limited the report to just the auto gather stats jobs (auto_only => true) as SQL doesn’t have a boolean type, so you would have to write a little PL/SQL wrapper to capture just those details (there’s a minimal sketch of such a wrapper at the end of this note). Here’s a sample of the (rather wide) output:



------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Operation Id | Operation             | Target                             | Start Time          | End Time            | Status      | Total Tasks | Successful Tasks | Failed Tasks | Active Tasks |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25811        | purge_stats           |                                    | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 0           | 0                | 0            | 0            |
|              |                       |                                    | 01.47.37.764146 PM  | 01.47.38.405437 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25810        | purge_stats           |                                    | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 0           | 0                | 0            | 0            |
|              |                       |                                    | 01.47.35.827284 PM  | 01.47.37.763926 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25809        | gather_database_stats | AUTO                               | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 285         | 282              | 3            | 0            |
|              | (auto)                |                                    | 01.46.31.672033 PM  | 01.47.35.826873 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25807        | gather_table_stats    | TEST_USER.T1                       | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 1           | 1                | 0            | 0            |
|              |                       |                                    | 12.59.57.704111 PM  | 12.59.57.822695 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |

etc.

You’ll notice in this little sample that operation 25809 is an (auto) gather_database_stats operation which ran 285 tasks, failing on 3 and succeeding on 282 – so let’s run the “single stats operation” report to find out more:

select
        dbms_stats.report_single_stats_operation(25809) text_line
from dual
;


-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Operation Id | Operation                    | Target | Start Time                      | End Time                        | Status    | Total Tasks | Successful Tasks | Failed Tasks | Active Tasks |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25809        | gather_database_stats (auto) | AUTO   | 08-SEP-18 01.46.31.672033 PM    | 08-SEP-18 01.47.35.826873 PM    | COMPLETED | 285         | 282              | 3            | 0            |
|              |                              |        | +01:00                          | +01:00                          |           |             |                  |              |              |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                                                                                                                                                     |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|                                                                                              T A S K S                                                                                              |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | Target                                                         | Type            | Start Time                          | End Time                            | Status                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.RECYCLEBIN$                                                | TABLE           | 08-SEP-18 01.46.50.719791 PM +01:00 | 08-SEP-18 01.46.51.882418 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.RECYCLEBIN$_OBJ                                            | INDEX           | 08-SEP-18 01.46.51.273134 PM +01:00 | 08-SEP-18 01.46.51.773297 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.RECYCLEBIN$_TS                                             | INDEX           | 08-SEP-18 01.46.51.777032 PM +01:00 | 08-SEP-18 01.46.51.787730 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
...
...
...
|    | SYS.WRH$_SEG_STAT_PK.WRH$_SEG_ST_3089296639_5150               | INDEX PARTITION | 08-SEP-18 01.47.35.409615 PM +01:00 | 08-SEP-18 01.47.35.483637 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.X$LOGMNR_CONTENTS                                          | TABLE           | 08-SEP-18 01.47.35.520504 PM +01:00 | 08-SEP-18 01.47.35.696953 PM +01:00 | FAILED                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.X$LOGMNR_REGION                                            | TABLE           | 08-SEP-18 01.47.35.699253 PM +01:00 | 08-SEP-18 01.47.35.722545 PM +01:00 | FAILED                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.X$DRC                                                      | TABLE           | 08-SEP-18 01.47.35.725003 PM +01:00 | 08-SEP-18 01.47.35.801384 PM +01:00 | FAILED                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|                                                                                                                                                                                                     |
|                                                                                                                                                                                                     |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

I’ve trimmed out most of the 285 entries, of course, showing that the last three in the list failed; but with no indication why they failed. Fortunately we could have called the report with “detail_level => ‘ALL'” – so let’s see what that gives us:

select
        dbms_stats.report_single_stats_operation(
                opid         => 25809,
                detail_level => 'ALL'
        ) text_line
from dual
;

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Operation | Operation             | Target | Start Time      | End Time        | Status    | Total    | Successful | Failed   | Active   | Job Name | Session  | Additional Info             |
| Id        |                       |        |                 |                 |           | Tasks    | Tasks      | Tasks    | Tasks    |          | Id       |                             |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25809     | gather_database_stats | AUTO   | 08-SEP-18       | 08-SEP-18       | COMPLETED | 285      | 282        | 3        | 0        |          | 250      | Parameters: [block_sample:  |
|           | (auto)                |        | 01.46.31.672033 | 01.47.35.826873 |           |          |            |          |          |          |          | FALSE] [cascade: NULL]      |
|           |                       |        | PM +01:00       | PM +01:00       |           |          |            |          |          |          |          | [concurrent: FALSE]         |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [degree:                    |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_DEGREE_VALUE]       |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [estimate_percent:          |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_ESTIMATE_PERCENT]   |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [granularity:               |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_GRANULARITY]        |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [method_opt:                |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_METHOD_OPT]         |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [no_invalidate:             |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DBMS_STATS.AUTO_INVALIDATE] |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [reporting_mode: FALSE]     |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [stattype: DATA]            |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                                                                                                                                                     |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|                                                                                              T A S K S                                                                                              |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|       | Target       | Type         | Start Time   | End Time     | Status    | Rank  | Job Name | Estimated    | Batching     | Histogram    | Extended     | Reason Code  | Additional   |        |
|       |              |              |              |              |           |       |          | Cost         | Info         | Columns      | Stats        |              | Info         |        |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|       | SYS.RECYCLEB | TABLE        | 08-SEP-18 01 | 08-SEP-18 01 | COMPLETED | 1     |          | N/A          | N/A          |              |              | stale stats  |              |        |
|       | IN$          |              | .46.50.71979 | .46.51.88241 |           |       |          |              |              |              |              |              |              |        |
|       |              |              | 1 PM +01:00  | 8 PM +01:00  |           |       |          |              |              |              |              |              |              |        |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
...
...
...
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|       | SYS.X$DRC    | TABLE        | 08-SEP-18 01 | 08-SEP-18 01 | FAILED    | 151   |          | N/A          | N/A          |              |              | no stats     | ORA-20000:   |        |
|       |              |              | .47.35.72500 | .47.35.80138 |           |       |          |              |              |              |              |              | Unable to    |        |
|       |              |              | 3 PM +01:00  | 4 PM +01:00  |           |       |          |              |              |              |              |              | analyze      |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | TABLE "SYS". |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | "X$DRC", log |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | miner or     |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | data guard   |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | must be      |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | started      |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | before       |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | analyzing    |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | this fixed   |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | table"       |        |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|                                                                                                                                                                                                     |
|                                                                                                                                                                                                     |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------




So we can now see that stats collection failed on the one object I’ve left in the extract because it’s an X$ object that only exists when LogMiner is running. You’ll notice that we also get some information about things like input parameters to calls and reasons why objects were selected (“stale stats” in the first item in this list).

It’s a great convenience – but it’s always possible to grumble: I’d rather like to see the elapsed time for each operation, or even a filter to limit the report to any operation that took more than X seconds. However, if I want to do a quick check on a client site I’d rather not have to type in the code to query the base tables by hand.
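As a footnote to that grumble – if you do want the auto_only option from a script then a minimal PL/SQL wrapper is all it takes. Here’s a sketch (unpolished; the three-day window is an arbitrary choice, and a very long report would have to be written out in chunks rather than pushed through a single dbms_output call):

set serveroutput on

declare
        m_report        clob;
begin
        m_report := dbms_stats.report_stats_operations(
                since     => systimestamp - interval '3' day,
                auto_only => true
        );
        dbms_output.put_line(m_report);
end;
/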

Subquery Order

Wed, 2018-09-05 07:09

From time to time I’ve wanted to optimize a query by forcing Oracle to execute existence (or non-existence) subqueries in the correct order because I know which subquery will eliminate most data most efficiently, and it’s always a good idea to look for ways to eliminate early. I’ve only just discovered (while doing some tests on 18c) that Oracle 12.2.0.1 introduced the /*+ order_subq() */ hint that seems to be engineered to do exactly that.

Here’s a very simple (and completely artificial) demonstration of use.


rem
rem     Script:         122_order_subq.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2018
rem

create table t1 as select * from all_objects;
create table t2 as select * from all_objects;
create table t3 as select * from all_objects;

create index t2_i1 on t2(object_id);
create index t3_i1 on t3(object_id);

prompt  =============================
prompt  order_subq(@main subq2 subq3)
prompt  =============================

explain plan for
select
        /*+
                qb_name(main)
                no_unnest(@subq2)
                no_unnest(@subq3)
                order_subq(@main subq2 subq3)
        */
        t1.object_name, t1.object_type
from
        t1
where
        exists (
                select
                        /*+ qb_name(subq2) */
                        null
                from    t2
                where   t2.object_id = t1.object_id * 5
        )
and     exists (
                select
                        /*+ qb_name(subq3) */
                        null
                from    t3
                where   t3.object_id = t1.object_id * 13
        )
;

select * from table(dbms_xplan.display(null,null,'outline'));

=============================
order_subq(@main subq2 subq3)
=============================

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2585036931

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |     1 |    53 | 51090   (1)| 00:00:02 |
|*  1 |  FILTER            |       |       |       |            |          |
|   2 |   TABLE ACCESS FULL| T1    | 61765 |  3196K|   163   (4)| 00:00:01 |
|*  3 |   INDEX RANGE SCAN | T2_I1 |     1 |     5 |     1   (0)| 00:00:01 |
|*  4 |   INDEX RANGE SCAN | T3_I1 |     1 |     5 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
...
      ORDER_SUBQ(@"MAIN" "SUBQ2" "SUBQ3")
...
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( EXISTS (SELECT /*+ NO_UNNEST QB_NAME ("SUBQ2") */ 0 FROM
              "T2" "T2" WHERE "T2"."OBJECT_ID"=:B1*5) AND  EXISTS (SELECT /*+
              NO_UNNEST QB_NAME ("SUBQ3") */ 0 FROM "T3" "T3" WHERE
              "T3"."OBJECT_ID"=:B2*13))
   3 - access("T2"."OBJECT_ID"=:B1*5)
   4 - access("T3"."OBJECT_ID"=:B1*13)

I’ve blocked subquery unnesting for the purposes of the demo and given a query block name to the two subqueries (using a name that identifies the associated table). As you can see, the execution plan uses the subqueries as filter subqueries, operating them in the order I’ve specified in my hint. You can also see that the hint is echoed down into the Outline section of the plan.

It’s possible that this is the plan that the optimizer would have chosen without the order_subq hint, so I ought to see if I can also use the hint to make the subqueries filter in the opposite order. I happen to know that executing the subquery against t3 is likely to eliminate more rows than executing the subquery against t2 (the “* 13” compared to the “* 5” is significant), so I really want the subqueries to be used in the opposite order anyway – so here’s what happens when I reverse the order in the hint:


prompt  =============================
prompt  order_subq(@main subq3 subq2)
prompt  =============================

explain plan for
select
        /*+
                qb_name(main)
                no_unnest(@subq2)
                no_unnest(@subq3)
                order_subq(@main subq3 subq2)
        */
        t1.object_name, t1.object_type
from
        t1
where
        exists (
                select
                        /*+ qb_name(subq2) */
                        null
                from    t2
                where   t2.object_id = t1.object_id * 5
        )
and     exists (
                select
                        /*+ qb_name(subq3) */
                        null
                from    t3
                where   t3.object_id = t1.object_id * 13
        )
;

select * from table(dbms_xplan.display(null,null,'outline'));

=============================
order_subq(@main subq3 subq2)
=============================

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3585049451

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |     1 |    53 | 51090   (1)| 00:00:02 |
|*  1 |  FILTER            |       |       |       |            |          |
|   2 |   TABLE ACCESS FULL| T1    | 61765 |  3196K|   163   (4)| 00:00:01 |
|*  3 |   INDEX RANGE SCAN | T3_I1 |     1 |     5 |     1   (0)| 00:00:01 |
|*  4 |   INDEX RANGE SCAN | T2_I1 |     1 |     5 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
...
      ORDER_SUBQ(@"MAIN" "SUBQ3" "SUBQ2")
...
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( EXISTS (SELECT /*+ NO_UNNEST QB_NAME ("SUBQ3") */ 0 FROM
              "T3" "T3" WHERE "T3"."OBJECT_ID"=:B1*13) AND  EXISTS (SELECT /*+
              NO_UNNEST QB_NAME ("SUBQ2") */ 0 FROM "T2" "T2" WHERE
              "T2"."OBJECT_ID"=:B2*5))
   3 - access("T3"."OBJECT_ID"=:B1*13)
   4 - access("T2"."OBJECT_ID"=:B1*5)

With the modified hint in place the order of the filter subqueries is reversed. Notice how the Predicate section also echoes the ordering of the subqueries.

Footnote

It should be noted that the order_subq() hint doesn’t get mentioned in the 18c SQL Language Reference “Alphabetical List of Hints”. If it were, then one of the little oddities that might get a mention is that the optimizer seems to ignore the hint if you disable CPU costing (not that anyone should have been doing that since 10g).

RTFM ?

Wed, 2018-08-29 05:04

My entrance at the Polish Oracle User Group conference 2017 has just resurfaced on Twitter. There is a back-story to this which contains an allegorical lesson in using Oracle. As I said in the opening remarks in the subsequent presentation: “The lesson began before I got to the stage”.

Like all the other speakers at POUG2017 I had received an email asking me to name a tune for a playlist. Having decided that Beethoven, Brahms and Tchaikowski were not the sort of composers the organisers had in mind I nominated (with a touch of irony) “Who wants to live forever?” by Queen (despite Richard Foote’s strenuous efforts to turn the Oracle world to David Bowie).

When the conference started I noticed two things: first, that the tunes for the “playlist” were actually being used to accompany speakers to the stage, secondly that the admin staff were all wearing monk-like gowns and hoods. So I asked for my tune to be changed to “The Imperial March” (Darth Vader’s theme) and borrowed the gown from the tallest admin person, with the results you see in the video clip.

So what’s the Oracle allegory ?

First you read the manuals, then you observe how it really works before you go live.

 

Error Logging

Fri, 2018-08-24 05:19

Error logging is a topic that I’ve mentioned a couple of times in the past, most recently as a follow-up in a discussion of the choices for copying a large volume of data from one table to another, but originally in an addendum about a little surprise you may get when you use extended strings (max_string_size = EXTENDED).

If you use the default call to dbms_errlog.create_error_log() to create an error logging table then Oracle will create a table with a few columns of its own plus every column (name) that you have in your original table – but it will create your columns as varchar2(4000), or nvarchar2(2000), or raw(2000) – unless you’ve set max_string_size to extended. Here’s a simple demo script with results from two different systems, one with the default setting and the other with the extended setting (note there’s a little inconsistency in the handling of raw() columns).


rem
rem     Script:         log_errors_min.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2018
rem     Purpose:
rem

create table t1 (
        v1      varchar2(10),
        n1      number(2,0),
        d1      date,
        nv1     nvarchar2(10),
        r1      raw(10)
);


execute dbms_errlog.create_error_log('t1')

desc err$_t1


max_string_size = STANDARD
--------------------------
 Name			       Null?	Type
 ----------------------------- -------- --------------------
 ORA_ERR_NUMBER$			NUMBER
 ORA_ERR_MESG$				VARCHAR2(2000)
 ORA_ERR_ROWID$ 			ROWID
 ORA_ERR_OPTYP$ 			VARCHAR2(2)
 ORA_ERR_TAG$				VARCHAR2(2000)
 V1					VARCHAR2(4000)
 N1					VARCHAR2(4000)
 D1					VARCHAR2(4000)
 NV1					NVARCHAR2(2000)
 R1					RAW(2000)


max_string_size = EXTENDED
--------------------------
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ORA_ERR_NUMBER$                        NUMBER
 ORA_ERR_MESG$                          VARCHAR2(2000)
 ORA_ERR_ROWID$                         ROWID
 ORA_ERR_OPTYP$                         VARCHAR2(2)
 ORA_ERR_TAG$                           VARCHAR2(2000)
 V1                                     VARCHAR2(32767)
 N1                                     VARCHAR2(32767)
 D1                                     VARCHAR2(32767)
 NV1                                    NVARCHAR2(16383)
 R1                                     RAW(32767)

With the extended setting every single “original” column that appears in this table will be a LOB, with an inline LOB locator of 30 or more bytes. (At least, that’s the 12.1.0.2 implementation; I haven’t checked for 12.2 or 18.3.)

If this is going to be a problem (e.g. you have a table defined with 500 columns but only use 120 of them) you can create a minimalist error logging table. Provided you create it with the ora_err% columns suitably defined you can add only those columns you’re really interested in (or feel threatened by), and you don’t have to declare them at extreme lengths. e.g.


create table err$_special (
        ora_err_number$         number,
        ora_err_mesg$           varchar2(2000),
        ora_err_rowid$          rowid,
        ora_err_optyp$          varchar2(2),
        ora_err_tag$            varchar2(2000),
        n1                      varchar2(128)
)
;

insert into t1 values(1,'abc','02-jan-1984',sys_op_c2c('abc'),hextoraw('0xFF')) 
log errors into err$_special
reject limit unlimited
;

execute print_table('select * from err$_special')


ORA_ERR_NUMBER$               : 1722
ORA_ERR_MESG$                 : ORA-01722: invalid number

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
N1                            : abc


If you try to create an error logging table that doesn’t include the 5 critical columns you’ll see Oracle error ORA-38900: missing mandatory column “ORA_ERR_{something}” of error log table “{your logging table name}” when you try to log errors into it. Moreover the 5 critical columns have to be the first 5 columns (in any order) in the table, otherwise you’ll get Oracle error ORA-38901: column “ORA_ERR_{something}$” of table “{your logging table name}” when you try to log errors into it.

Descending bug

Wed, 2018-08-22 07:20

Following on from Monday’s posting about reading execution plans and related information, I noticed a question on the ODC database forum asking about the difference between “in ({list of values})” and a list of “column = {constant}” predicates connected by OR. The answer to the question is that there’s essentially no difference as you would be able to see from the predicate section of an execution plan:


SELECT  c1, c2, c3, c4, c5, c6, c7, c8..  
FROM    TAB1  
WHERE   STS IN ( 'A', 'B')  
AND     cnt < '4'  
AND     dt < sysdate  
and     rownum <=1;  
  
---------------------------------------------------------------------------------------------------------  
| Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |  
---------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT              |                         |     1 |  1847 | 33399   (1)| 00:03:14 |  
|*  1 |  COUNT STOPKEY                |                         |       |       |            |          |  
|   2 |   INLIST ITERATOR             |                         |       |       |            |          |  
|*  3 |    TABLE ACCESS BY INDEX ROWID| TAB1                    |   114K|   201M| 33399   (1)| 00:03:14 |  
|*  4 |     INDEX RANGE SCAN          | TAB1_STS_IDX            |   114K|       |   723   (1)| 00:00:05 |  
---------------------------------------------------------------------------------------------------------  
  
Predicate Information (identified by operation id):  
---------------------------------------------------  
   1 - filter(ROWNUM<=1)  
   3 - filter("cnt"<'4' AND "dt"<SYSDATE@!)  
   4 - access("STS"='A' OR "STS"='B')  

Note how the predicate section tells you that the original “sts in ( ‘A’, ‘B’ )” has been transformed into “sts = ‘A’ or sts = ‘B'”.

A further point I made about IN-lists in Monday’s post was that as one step in the transformation Oracle would sort the list and eliminate duplicates, and it suddenly occurred to me to wonder whether Oracle would sort the list in descending order if the only relevant index were defined to start with a descending column. Naturally I had to try it so here’s a suitable script to prepare some data:

rem
rem     Script:         descending_bug_06.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2018
rem     Purpose:
rem
rem     Last tested
rem             18.3.0.0        Crashes
rem             12.2.0.1        Crashes
rem             12.1.0.2        Crashes
rem             11.2.0.4        Bad Plan
rem

create table t1
nologging
pctfree 95 pctused 5
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        case mod(rownum,1000)
                when 0 then 'A'
                when 3 then 'B'
                when 6 then 'C'
                       else 'D'
        end                             sts,
        case mod(rownum,1000)
                when 0 then '1'
                when 3 then '2'
                when 6 then '3'
                       else '4'
        end                             cnt,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

create index t1_i1a on t1(sts) nologging;
create index t1_i1d on t1(sts desc) nologging;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size skewonly'
        );
end;
/

There is one oddity in this script – if you’ve got every column in an index declared as DESC you’ve made a mistake and none of the columns should be declared as DESC. The feature is relevant only if you want a mixture of ascending and descending column in a single index.

An important detail of the script is that I’ve gathered stats AFTER creating the objects – it’s important to do this, even in 18.3, because (a) creating the “descending” index will result in a hidden virtual column being created to represent the descending column and I want to make sure I have stats on that column, and (b) the “stats on creation” code doesn’t generate histograms and I want a (frequency) histogram on column sts and on the hidden, virtual, descending version of the column.
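A quick way to see the hidden column and confirm that the histograms have appeared is a couple of queries like the following (just a sketch – the SYS_NC name will vary from system to system):

select  column_name, hidden_column, num_distinct, histogram
from    user_tab_cols
where   table_name = 'T1'
order by
        internal_column_id
;

select  column_name, endpoint_number, endpoint_value
from    user_tab_histograms
where   table_name  = 'T1'
and     column_name = 'SYS_NC00006$'
order by
        endpoint_number
;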

After generating the data and checking that I have the correct histograms for sts and sys_nc00006$ (the relevant hidden column) I can then run the following test:

set serveroutput off
alter session set statistics_level = all;

alter index t1_i1d invisible;

select  sts, count(*)
from    t1
where   sts in ('B','C')
group by
        sts
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

alter index t1_i1d   visible;
alter index t1_i1a invisible;

select  sts, count(*)
from    t1
where   sts in ('B','C')
group by
        sts
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

alter session set statistics_level = typical;
set serveroutput on


The code makes one index invisible then runs a query that should use an inlist iterator; then it switches the indexes over (making the invisible one visible and vice versa) and repeats the query. I’ve enabled rowsource execution statistics and pulled the execution plans from memory to make sure I don’t get fooled by any odd glitches that might exist within “explain plan”. Here are the results from 11.2.0.4 – normal index, then descending index – with a little cosmetic cleaning:


S   COUNT(*)
- ----------
B        100
C        100

SQL_ID  f20u42pmw1z6w, child number 0
-------------------------------------
select sts, count(*) from t1 where sts in ('B','C') group by  sts

-----------------------------------------------------------------------------------------
| Id  | Operation            | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |      1 |        |      2 |00:00:00.01 |       5 |
|   1 |  SORT GROUP BY NOSORT|        |      1 |      2 |      2 |00:00:00.01 |       5 |
|   2 |   INLIST ITERATOR    |        |      1 |        |    200 |00:00:00.01 |       5 |
|*  3 |    INDEX RANGE SCAN  | T1_I1A |      2 |    178 |    200 |00:00:00.01 |       5 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access(("STS"='B' OR "STS"='C'))


Index altered.
Index altered.


S   COUNT(*)
- ----------
C        100
B        100


SQL_ID  f20u42pmw1z6w, child number 0
-------------------------------------
select sts, count(*) from t1 where sts in ('B','C') group by  sts

-----------------------------------------------------------------------------------------
| Id  | Operation            | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |      1 |        |      2 |00:00:00.02 |     198 |
|   1 |  SORT GROUP BY NOSORT|        |      1 |      2 |      2 |00:00:00.02 |     198 |
|*  2 |   INDEX FULL SCAN    | T1_I1D |      1 |   1000 |    200 |00:00:00.02 |     198 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((SYS_OP_UNDESCEND("T1"."SYS_NC00006$")='B' OR
              SYS_OP_UNDESCEND("T1"."SYS_NC00006$")='C'))


As expected we see counts of 100 for ‘B’s and ‘C’s, and we also see that the “sort group by nosort” operation with the descending index has produced the results in reverse order. The problem though is that the optimizer has decided to use an “index full scan” on the descending index, and the estimate of the rows returned is terribly wrong (and seems to be the common “5% guess”, used once for each target value), and the number of buffer visits is huge compared to the result from the normal index – Oracle really did walk every leaf block in the index to get this result. The predicate section also looks rather silly – why hasn’t the optimizer produced predicates more like: “sys_nc00006$ = sys_op_descend(‘B’)” ?
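If you want to see what the hidden column actually holds, sys_op_descend() can be called directly (a sketch, run from dual) – it returns the “inverted” raw value that a descending index stores for each input value:

select
        sys_op_descend('B')     stored_for_b,
        sys_op_descend('C')     stored_for_c
from
        dual
;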

In passing you’ll notice that the estimated rows in the plan using the normal index is a little low. This is the result of Oracle using a small sample (ca. 5,500 rows) in 11g to gather histogram stats. 12c will do better for a frequency histogram with the fast algorithm it uses for a 100% (auto) sample size.

So 11g doesn’t do very well but we’ve got 12.1.0.2, 12.2.0.1, and (in the last couple of weeks) 18.3 to play with. Here’s the result from 12.1.0.2 and 12.2.0.1 for the query that should use the descending index:


select  sts, count(*)
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [qernsRowP], [1], [], [], [], [], [], [], [], [], [], []


SQL_ID  f20u42pmw1z6w, child number 0
-------------------------------------
select sts, count(*) from t1 where sts in ('B','C') group by  sts

-----------------------------------------------------------------------------------------
| Id  | Operation            | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |      1 |        |      0 |00:00:00.01 |       0 |
|   1 |  SORT GROUP BY NOSORT|        |      1 |      2 |      0 |00:00:00.01 |       0 |
|   2 |   INLIST ITERATOR    |        |      1 |        |    101 |00:00:00.03 |       5 |
|*  3 |    INDEX RANGE SCAN  | T1_I1D |      2 |    200 |    101 |00:00:00.01 |       5 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access(("T1"."SYS_NC00006$"=SYS_OP_DESCEND('B') OR
              "T1"."SYS_NC00006$"=SYS_OP_DESCEND('C')))
       filter((SYS_OP_UNDESCEND("T1"."SYS_NC00006$")='B' OR
              SYS_OP_UNDESCEND("T1"."SYS_NC00006$")='C'))

The query crashed! The plan, however, did look appropriate – the optimizer picked an inlist iterator, picked an index range scan, got the correct estimate of rows (index entries), and did better with the predicate section (though having used a sensible predicate for the access predicate it then used the bizarre version as the filter predicate). Judging from the A-rows column the query seems to have crashed at roughly the point where execution was switching from the range scan for the first iteration into the range scan for the second iteration.

And then there’s Oracle 18.3 – which does the same as the 12c versions :(

To check whether my silly “single column so it shouldn’t be declared descending” index was the sole cause of the problem I repeated the tests using a two-column index on (sts, cnt).

Conclusion:

Descending indexes or (to be more accurate) indexes with descending columns can still produce problems even in the very latest version of Oracle.

Footnote

Oracle MoS has the wonderful “ORA-600/ORA-7445/ORA-700 Error Look-up Tool (Doc ID 153788.1)” (which doesn’t yet allow you to choose 18.3 as a version) so I used this to do a look-up for ORA-00600 errors with first parameter qernsRowP in 12.2.0.1 and got the following suggestion from Doc ID 285913.1: “set event:10119 to disable no-sort fetch and then reparse the failing SQL.” The example suggested setting the event to level 12, and this solved the problem for all three failing versions – but the suggestion came with a warning: “Setting this event at system level may impact the performance of database.” A sketch of the session-level syntax follows, and after that the execution plan (taken, in this case, from 18.2) that may explain the warning:
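The event syntax here is just the generic “alter session set events” form with the level 12 taken from the MoS example – check the note itself before relying on it:

alter session set events '10119 trace name context forever, level 12';

-- reparse and re-run the failing statement, capture the plan, then switch the event off:

alter session set events '10119 trace name context off';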

 

S   COUNT(*)
- ----------
B        100
C        100

SQL_ID  f20u42pmw1z6w, child number 0
-------------------------------------
select sts, count(*) from t1 where sts in ('B','C') group by  sts

------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |      1 |        |      2 |00:00:00.01 |       4 |       |       |          |
|   1 |  HASH GROUP BY     |        |      1 |      2 |      2 |00:00:00.01 |       4 |  1558K|  1558K|  659K (0)|
|   2 |   INLIST ITERATOR  |        |      1 |        |    200 |00:00:00.01 |       4 |       |       |          |
|*  3 |    INDEX RANGE SCAN| T1_I1D |      2 |    200 |    200 |00:00:00.01 |       4 |       |       |          |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access(("T1"."SYS_NC00006$"=SYS_OP_DESCEND('B') OR "T1"."SYS_NC00006$"=SYS_OP_DESCEND('C')))
       filter((SYS_OP_UNDESCEND("T1"."SYS_NC00006$")='B' OR SYS_OP_UNDESCEND("T1"."SYS_NC00006$")='C'))


The plan has changed from using a “sort group by nosort” – which effectively means just keeping a running count as you go – to a real “hash group by” which means you have to do the hashing arithmetic for every value (though maybe there’s a deterministic trick that means Oracle won’t do the arithmetic if the next value to be hashed is the same as the previous value) and the actual memory used (659K) does seem a little extreme for counting two distinct values.

 

Masterclass – 1

Mon, 2018-08-20 04:42

A recent thread on the Oracle developer community database forum raised a fairly typical question with a little twist. The basic question is “why is this (very simple) query slow on one system when it’s much faster on another?” The little twist was that the original posting told us that “Streams Replication” was in place to replicate the data between the two systems.

To make life easy for remote trouble-shooters the poster had supplied (for each system) the output from SQL Monitor when running the query, the autotrace output (which shows the predicate section that SQL Monitor doesn’t report), and the session statistics for the query run, plus some statistics about the single table in the query, the index used in the plan, and the column on which that index was based.

Here, with a little cosmetic editing (and a query that has clearly been camouflaged by the OP), is the information supplied for the faster database, where the query took about 30 seconds to complete.


SELECT c1, c2, c3, c4, c5, c6, c7, c8..  
FROM TAB1  
WHERE STS IN ( 'A', 'B')  
AND cnt < '4'  
AND dt < sysdate  
and rownum <=1;  
  
Sql_monitor and stats from DB1  
******************************  
  
Global Information  
------------------------------  
 STS              :  DONE (ALL ROWS)             
 Instance ID         :  1                           
 Execution Started   :  08/17/2018 08:31:22         
 First Refresh Time  :  08/17/2018 08:31:22         
 Last Refresh Time   :  08/17/2018 08:31:53         
 Duration            :  31s                         
 Program             :  sqlplus.exe                 
 Fetch Calls         :  1                           
  
Global Stats  
===============================================================================  
| Elapsed |   Cpu   |    IO    | Concurrency | Fetch | Buffer | Read  | Read  |  
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Calls |  Gets  | Reqs  | Bytes |  
===============================================================================  
|      33 |    3.00 |       30 |        0.08 |     1 |   102K | 38571 | 301MB |  
===============================================================================  
  
SQL Plan Monitoring Details (Plan Hash Value=715774357)  
======================================================================================================================================================================================  
| Id |            Operation            |          Name           |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Activity |       Activity Detail        |  
|    |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes |   (%)    |         (# samples)          |  
======================================================================================================================================================================================  
|  0 | SELECT STATEMENT                |                         |         |       |         1 |    +31 |     1 |        1 |       |       |          |                              |  
|  1 |   COUNT STOPKEY                 |                         |         |       |         1 |    +31 |     1 |        1 |       |       |          |                              |  
|  2 |    INLIST ITERATOR              |                         |         |       |         1 |    +31 |     1 |        1 |       |       |          |                              |  
|  3 |     TABLE ACCESS BY INDEX ROWID | TAB1                    |    114K | 33399 |        32 |     +0 |     2 |        1 | 38377 | 300MB |    96.77 | Cpu (1)                      |  
|    |                                 |                         |         |       |           |        |       |          |       |       |          | db file sequential read (16) |  
|    |                                 |                         |         |       |           |        |       |          |       |       |          | read by other session (13)   |  
|  4 |      INDEX RANGE SCAN           | TAB1_STS_IDX            |    115K |   723 |        30 |     +2 |     2 |     118K |   194 |   2MB |     3.23 | read by other session (1)    |  
======================================================================================================================================================================================  
  
---------------------------------------------------------------------------------------------------------  
| Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |  
---------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT              |                         |     1 |  1847 | 33399   (1)| 00:03:14 |  
|*  1 |  COUNT STOPKEY                |                         |       |       |            |          |  
|   2 |   INLIST ITERATOR             |                         |       |       |            |          |  
|*  3 |    TABLE ACCESS BY INDEX ROWID| TAB1                    |   114K|   201M| 33399   (1)| 00:03:14 |  
|*  4 |     INDEX RANGE SCAN          | TAB1_STS_IDX            |   114K|       |   723   (1)| 00:00:05 |  
---------------------------------------------------------------------------------------------------------  
  
Predicate Information (identified by operation id):  
---------------------------------------------------  
   1 - filter(ROWNUM<=1)  
   3 - filter("cnt"<'4' AND "dt"<SYSDATE@!)  
   4 - access("STS"='A' OR "STS"='B')  
  
  
Table stats 
-----------
table_name    num_rows        blocks  empty_blocks  chain_cnt  avg_row_len             
TAB1        79,654,925    22,416,917             0          0        1,847          
  
column_stats(STS)
------------------
table_name  column_name  num_distinct  num_nulls    density  avg_col_len   
TAB1        STS                     5          0  6.2049E-9            2         
  
Index_stats(on STS)
-------------------
index_name    leaf_blocks  distinct_keys  avg_leaf_blocks_per_key  avg_data_blocks_per_key  clustering_factor    num_rows
TAB1_STS_IDX      487,939              5                   97,587                4,458,874         22,294,372  78,308,939   
 
Session stats
-------------
process last non-idle time              1,534,508,966
session connect time                    1,534,508,966
logical read bytes from cache             839,663,616
cell physical IO interconnect bytes       316,055,552
physical read bytes                       316,055,552
physical read total bytes                 316,055,552
file io wait time                          17,044,083
session pga memory                          8,643,880
session pga memory max                      8,643,880
temp space allocated (bytes)                4,194,304
session uga memory                          1,755,696
session uga memory max                      1,755,696
buffer is pinned count                        135,743
table fetch by rowid                          117,519
non-idle wait count                           107,301
session logical reads                         102,500
consistent gets                               102,450
consistent gets from cache                    102,448
no work - consistent read gets                102,368
buffer is not pinned count                    101,741
free buffer inspected                          43,458
free buffer requested                          38,592
physical read total IO requests                38,581
physical read IO requests                      38,581
physical reads                                 38,581
physical reads cache                           38,579
hot buffers moved to head of LRU               37,258
bytes sent via SQL*Net to client                7,370
bytes received via SQL*Net from client          6,869
redo size                                       5,536
undo change vector size                         4,432
DB time                                         3,166
non-idle wait time                              2,962
user I/O wait time                              2,954
table fetch continued row                       2,423


And here’s the equivalent information from the slower database where the query took more than 9 times as long (4 minutes 42 seconds) to complete.


Global Information  
------------------------------  
 STS              :  DONE (ALL ROWS)           
 Instance ID         :  1                         
 Execution Started   :  08/17/2018 08:21:47       
 First Refresh Time  :  08/17/2018 08:21:47       
 Last Refresh Time   :  08/17/2018 08:26:29       
 Duration            :  282s                      
 Module/Action       :  SQL*Plus/-                
 Program             :  sqlplus.exe               
 Fetch Calls         :  1                         
  
Global Stats  
================================================================  
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |  
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |  
================================================================  
|     287 |    8.76 |      278 |     1 |   110K | 110K | 858MB |  
================================================================  
  
SQL Plan Monitoring Details (Plan Hash Value=715774357)  
======================================================================================================================================================================================  
| Id |            Operation            |          Name           |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |        Activity Detail        |  
|    |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |          (# samples)          |  
======================================================================================================================================================================================  
|  0 | SELECT STATEMENT                |                         |         |       |         1 |   +282 |     1 |        1 |      |       |          |                               |  
|  1 |   COUNT STOPKEY                 |                         |         |       |         1 |   +282 |     1 |        1 |      |       |          |                               |  
|  2 |    INLIST ITERATOR              |                         |         |       |         1 |   +282 |     1 |        1 |      |       |          |                               |  
|  3 |     TABLE ACCESS BY INDEX ROWID | TAB1                    |    142K | 40211 |       282 |     +1 |     2 |        1 | 109K | 854MB |   100.00 | db file sequential read (277) |  
|  4 |      INDEX RANGE SCAN           | TAB1_STS_IDX            |    142K |   892 |       280 |     +3 |     2 |     118K |  491 |   4MB |          |                               |  
======================================================================================================================================================================================  
  
Execution Plan (autotrace) 
---------------------------------------------------------------------------------------------------------  
| Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |  
---------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT              |                         |     1 |  1847 | 40211   (1)| 00:08:03 |  
|*  1 |  COUNT STOPKEY                |                         |       |       |            |          |  
|   2 |   INLIST ITERATOR             |                         |       |       |            |          |  
|*  3 |    TABLE ACCESS BY INDEX ROWID| TAB1                    |   141K|   249M| 40211   (1)| 00:08:03 |  
|*  4 |     INDEX RANGE SCAN          | TAB1_STS_IDX            |   141K|       |   892   (1)| 00:00:11 |  
---------------------------------------------------------------------------------------------------------  
  
Predicate Information (identified by operation id):  
---------------------------------------------------  
   1 - filter(ROWNUM<=1)  
   3 - filter("cnt"<'4' AND "dt"<SYSDATE@!)  
   4 - access("STS"='A' OR "STS"='B')  
            
Table stats
-----------
table_name    num_rows        blocks  empty_blocks  chain_cnt  avg_row_len             
TAB1        79,447,350   22,318,667            710    537,597        1,847  
  
column_stats(STS)
-----------------
table_name  column_name  num_distinct  num_nulls    density  avg_col_len  
TAB1        STS                     5          0  6.1789E-9            2  
  
Index_stats(on STS)
-------------------
index_name    leaf_blocks  distinct_keys  avg_leaf_blocks_per_key  avg_data_blocks_per_key  clustering_factor    num_rows
TAB1_STS_IDX      493,152              5                   98,630                4,382,625         21,913,127  79,106,263   


Session Stats
-------------
process last non-idle time              1,534,508,200
session connect time                    1,534,508,200
logical read bytes from cache             903,790,592
physical read total bytes                 899,629,056
cell physical IO interconnect bytes       899,629,056
physical read bytes                       899,629,056
file io wait time                         277,881,742
session pga memory                          8,586,744
session pga memory max                      8,586,744
temp space allocated (bytes)                4,194,304
session uga memory max                      1,690,184
session uga memory                          1,690,184
buffer is pinned count                        129,148
table fetch by rowid                          117,521
session logical reads                         110,326
consistent gets                               110,276
consistent gets from cache                    110,276
no work - consistent read gets                110,071
non-idle wait count                           109,879
free buffer requested                         109,830
physical read IO requests                     109,818
physical reads cache                          109,818
physical reads                                109,818
physical read total IO requests               109,818
buffer is not pinned count                    109,577
free buffer inspected                          70,740
hot buffers moved to head of LRU               31,910
DB time                                        28,203
non-idle wait time                             27,788
user I/O wait time                             27,788
dirty buffers inspected                        19,067
bytes sent via SQL*Net to client               14,927
bytes received via SQL*Net from client         10,607
redo size                                       5,440
undo change vector size                         4,432
table fetch continued row                       3,660

There are all sorts of noteworthy details in these two sets of information – some of the “how to see what’s in front of you” type, some of the “be careful, Oracle can deceive you” type. So I’m going to walk though the output picking up a number of background thoughts before commenting on the answer to the basic question.

We’ll start with the object statistics, then we’ll look at the SQL Monitor plan to see if we can determine where the extra time was spent, then we’ll try to work out what else the plan might be telling us about the code and data, then we’ll summarise my observations to make a claim about the difference in behaviour.

Object statistics

The table has 79M rows with an average row length of 1,847 bytes, using 22M blocks. With an 8KB block size and that average row size we would expect to see about 3 rows per block, and that’s fairly consistent with the value of rows / blocks.  We don’t know what the sample size was for this stats collection, but it might have been a “small” sample size rather than the 100% you would get from using auto_sample_size, so that might also explain some discrepancy between the two different views on the figures.

We note that the secondary system reports a chain_cnt in excess of 500,000 rows. The only (unhacked) way that this figure could be set would be through a call to analyze statistics, and once the figure is there it won’t go away unless you use the analyze command again to delete statistics.  We don’t know the history of how and when the figure got there so it doesn’t guarantee that there are any chained or migrated rows, nor does the zero in the table stats on the primary system guarantee that it doesn’t have any chained or migrated rows – all it tells us is that at some time someone used the wrong command to gather stats and there were some (less than 1%) migrated or chained rows in the table at the time. (The optimizer will use this figure in its arithmetic if it is set, by the way, so it may affect some of the cost calculations – but not by a huge amount.)

The column sts reports 5 distinct values, no nulls, and a density of 6.2e-9 which is roughly half of 1/79M: so we have a frequency histogram on the column (in the absence of a histogram the density would be 1/5, and it’s reasonable to assume that the number of buckets was either the default or set to something larger than 5).  We were told that the system was running 11.2.0.4 – so we have to be a little suspicious about the accuracy of this histogram since it will have been sampled with a very small sample if the stats collection had used auto_sample_size. (12c will use a specially optimized 100% sample for frequency and top-N histograms when using auto_sample_size)
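Just to put the arithmetic on view (the tiny difference from the reported 6.2049e-9 is consistent with num_rows having been slightly different when the stats were gathered):

select
        round(1 /  79654925     , 12)   one_over_num_rows,
        round(1 / (2 * 79654925), 12)   half_of_that
from
        dual
;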

The index on sts has a clustering_factor of around 22M which is similar to the number of blocks in the table – and that’s not too surprising if there are only a very small number of distinct values in the column – especially when the presence of the histogram suggests that there’s a skew in the data distribution. (There’s more to come on that point.) The number of leaf blocks is about 500,000 (being lazy about arithmetic) – just as a side note this suggests the index is running in a fairly inefficient state (and probably hasn’t been created with the compress keyword).

Doing a rough estimate of the index arithmetic :  the avg_col_len for sts is 2, so the space required for each index entry will be 13 bytes (2 for the column, 7 for the rowid content, 2 for the row header, 2 for the row directory entry).  Take off the block overhead, and assume the index is running at a “typical” 70% space usage per leaf block and you might expect 5,600 bytes used per leaf block for current index data and that works out to about 430 index entries per leaf block.  With 79M rows in the table that should lead to 79M/430 leaf blocks – i.e. roughly 184,000 leaf blocks, not 493,000 leaf blocks.  However it’s not unusual to see an index with extremely repetitive values operating at something like 50% utilisation, which would bring our estimate to about 310 rows per leaf block and 255,000 leaf blocks – which is still off by a factor of nearly 2 compared to what we’ve actually got. Again, of course, we have to be a little bit cautious about these statistics – we don’t know the sample size, and Oracle uses a surprisingly small number of blocks to sample the stats for an index.
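Here’s the same leaf-block arithmetic laid out as a query – all the inputs (roughly 8,000 usable bytes per block, 13 bytes per index entry, 70% or 50% utilisation) are the assumptions stated above, so the outputs are only ballpark figures:

select
        trunc(8000 * 0.70 / 13)                         entries_per_leaf_70pct,
        round(79000000 / trunc(8000 * 0.70 / 13))       leaf_blocks_70pct,
        trunc(8000 * 0.50 / 13)                         entries_per_leaf_50pct,
        round(79000000 / trunc(8000 * 0.50 / 13))       leaf_blocks_50pct
from
        dual
;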

Where’s the time?

The SQL Monitor gives us a very clear report of where most of the time went – almost all of it was spent in I/O waits, and almost all of the wait time was in the “table access by index rowid” operation in both cases; but the primary system did 38,377 read requests while the secondary did 109,000 read requests in that line of the plan. It is significant, though, that quite a lot (40%) of the ASH samples for that operation on the primary system were for “read by other session” rather than “db file sequential read”:  in other words some other session(s) were doing a lot of work to pull the data we wanted into the buffer cache at the same time. Apart from the fact that a wait for “read by other session” often means we spend less time waiting than if we’d had to do the read ourselves, the presence of this wait suggests that other sessions may be “pre-caching” data for us so that we end up having to read far fewer blocks than would otherwise be the case.

It’s important to note at the same time that the difference in Buffer Gets for the two systems was small – 102K vs. 110K – and the “Rows (actual)” was the same in both cases – 118K entries returned by the index range scan.  Both systems did similar amounts of “logical” work, to process similar amounts of data; the difference was the fraction of the work that required a buffer get to turn into a disc read or a “wait for other read”.

We might want to pick up a few more numbers to corroborate the view that the only significant difference was in the volume of data cached and not some more esoteric reason.  Some of the session statistics should help.


DB1:  table fetch by rowid                          117,519
DB2:  table fetch by rowid                          117,521

DB1:  undo change vector size                         4,432
DB2:  undo change vector size                         4,432

DB1:  redo size                                       5,536
DB2:  redo size                                       5,440

DB1:  session logical reads                         102,500
DB2:  session logical reads                         110,326

DB1:  no work - consistent read gets                102,368
DB2:  no work - consistent read gets                110,071

DB1:  table fetch continued row                       2,423
DB2:  table fetch continued row                       3,660

The number of rows fetched by rowid is virtually identical and we have done (virtually) no work that generates undo or redo – such as delayed block cleanout; there are no statistics shown for “%undo record applied” so we probably haven’t done very much work to get a read consistent view of the data, though we can’t be sure whether the OP simply failed to copy that stat into the list supplied (but then the similarity of “session logical reads” to “no work – consistent read gets” confirms the hypothesis that we didn’t do any (significant) work on visiting undo blocks).

We do see a few percent increase in the number of buffer gets (“session logical reads”) – but this may reflect the fact that the actual pattern of data in one table is slightly different from the pattern in the other – thanks to ASSM the process id of the process that inserts a row into a table can affect (within a small range, usually) the block into which the row is inserted; but at 102,000 / 110,000 buffer gets to visit 117,500 rows in the table we can see that there must be some table blocks that hold two (or more) rows that are identified as consecutive in the index – leading to some row visits being achieved through a buffer pin and without a fresh buffer get. You’ll note that this argument is consistent with the small variation in clustering_factor (not that we entirely trust those figures) for the two indexes – the system with the lower clustering_factor for the index has done fewer buffer gets to acquire the same number of rows from the table – by definition that means (assuming default setup) that there are more cases where “the next table row” is in the same block as the current row.

The final figure I’ve shown is the “table fetch continued row”: according to the table stats (which we don’t necessarily trust completely) 500K out of 79M rows are chained/migrated which is roughly 0.6%. We know that we’re visiting about 117K table rows so we might expect (on average) roughly the same percentage migrated/chained viz: 0.6% of 117K = 743, so there’s a little anomaly there (or an error in our assumption about “average” behaviour).  It’s worth noting, though, that a “continued fetch” would have to do an extra buffer visit (and maybe an extra physical read).  You might wonder, of course, how there could be any chained or migrated rows when the average row length is 1,847 bytes but in a follow-up post the OP did say there were 3 BLOB columns in the table, which can cause havoc with interpreting stats for all sorts of reasons. We don’t have any information about the table structure – particularly whether the columns in the query appear before or after the BLOB columns in the table definition – and we don’t know what processing takes place (for example, maybe the 3rd BLOB is only updated after the sts column has been changed to a value other than A or B, which would help to explain why we shouldn’t be using the 0.6% calculation above as a table-wide average), so we’re not in a position to say why any of the continued fetches appear, but there are several guesses we could make and they’re all easy to check.

Plan observations

If we examine row estimates we see 114K for the faster plan and 141K for the slower plan (with a closely corresponding variation in cost). The difference in estimates simply tells us that the histogram gathering probably used a small sample size and is subject to a lot of variation. The scale of the estimates tells us that the A and B rows are probably rare – call it 125K out of 79M rows, about 0.16% of the total rows in the table, so it would not be surprising to see consecutive samples for the histogram producing significant variations in estimates.

The other interesting thing we can note in the SQL Monitor plan is that the Starts column for the index range scan / table access operations in both plans shows the value 2: this means that there are no “A” rows that match the other predicates: Oracle has run the “A” iteration to completion then started the “B” iteration and found a row on the second iteration. Is this a coincidence, should it always happen, or is it only fairly likely to happen; and is it possible to find times when there are no suitable “B” rows but plenty of suitable “A” rows? The final predicate in the query is “rownum <= 1” – so the query is picking one row with no explicit strategy for choosing a specific row when there are multiple appropriate rows. Does this mean that we could optimize the query by rewriting it as a “union all” (sketched below) that searched for B rows first and A rows second? We just don’t know enough about the processing.
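Purely to show the shape of the rewrite being considered (a sketch only – “select *” stands in for the original camouflaged column list, and the update at the end of this note explains why, with this particular data, it wouldn’t actually help):

select  *
from    (
        select  *
        from    tab1
        where   sts = 'B'
        and     cnt < '4'
        and     dt  < sysdate
        and     rownum <= 1
        union all
        select  *
        from    tab1
        where   sts = 'A'
        and     cnt < '4'
        and     dt  < sysdate
        and     rownum <= 1
        )
where   rownum <= 1
;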

In passing, we can’t get Oracle to search the B rows first by changing the order of the in-list.  If you have a predicate like “where sts in ({list of literal values})” the optimizer will sort the list to eliminate duplicates before rewriting the predicate as a list of disjuncts, and then (if the path uses an iterator) iterate through the list in the resulting order.
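If you want to check that for yourself, a quick sketch (using the camouflaged table and column names from the posting) is simply to write the list “backwards” and look at the Predicate Information:

explain plan for
select  *
from    tab1
where   sts in ('B', 'B', 'A')
and     rownum <= 1
;

select * from table(dbms_xplan.display);

Whatever order (and however many duplicates) you supply, the access predicate should come out as something like: access("STS"='A' OR "STS"='B').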

In the absence of information about the way in which the data is processed we can only say that we need to avoid visiting the table so frequently. To do this we will need to add one or both of the columns from the other predicates to the index – this might double the size of the index, but eliminate 95% of the potential I/O.  For example if we discover that A and B rows are initially created “into the future” and this query is looking for a row whose “time has come” so that it can be processed and changed to an X row (say), then there may only ever be a tiny number of rows where sts = ‘A’ and dt < sysdate, and an index on (sts, dt) would be a perfect solution (especially if it were compressed on at least the first column).

The OP has declared a reluctance to add an index to the table – but there are two points to go with this indexing strategy. Since we know there’s a frequency histogram and the A and B rows appear to be rare values – what benefit is there in having an index that covers the other values (unless 2 of the remaining 3 are also rare)?  How about creating a function-based index that represents only the rare values and modifying this code to use that index – e.g.

create index t1_id on t1 (
        case sts when 'A' then sts when 'B' then sts end,
        case sts when 'A' then dt  when 'B' then dt  end
) compress 1
;

select  *
from    t1
where   case sts when 'A' then sts when 'B' then sts end in ('A','B')
and     case sts when 'A' then dt  when 'B' then dt  end < sysdate
and     cnt < '4'
and     rownum <= 1
/


You might be able to replace a huge index (79M rows worth) with this small one (120K rows worth) unless there’s too much other code in the system that has to be adjusted or the sts column is actually the target of a referential integrity constraint; at worst you could add this index knowing that it’s generally not going to consume much in the way of extra space or processing resources and is going to save you a lot of work for this query.

Summary

The execution plan from SQL Monitor points very strongly to the fast system benefiting from having a lot of the relevant data cached and constantly being reloaded into the cache by other sessions while the slow system has to acquire almost all of its data by real physical reads. Most of the reads address the table so engineering an index that is low-cost and (fairly) high precision is the most significant strategy for reducing the workload and time on the slow system.

The fact that all the potential A rows fail to match the full predicate set suggests that there MAY be some aspect of the processing that means it would be more efficient to check for B rows before checking for A rows.

Given the massive skew in the data distribution a function-based index that hides all the non-popular values (or even all the values that are not of interest to this query) may be the most cost-effective way of adding a very effective index to the table with minimal resource requirements.

And finally

It’s taken me more than 4 hours to write this note after spending about 10 minutes reading through the information supplied by the OP and identifying and cross-checking details. A small fraction of the 4+ hours was spent creating a little model to check something I had to say about in-lists, the rest of it was trying to write up a coherent description covering all the details.

That’s it for today, but I may have missed a couple of points that I noticed as I read the OP’s posting; and I will want to do a little cosmetic work on this article and check grammar and spelling over the next couple of days.

Update (already)

Shortly after I posted this blog note the owner of the question reported the following as the distribution of values for the sts column:

 STS   COUNT(*)
---- ----------
   A          6
   E        126
   D        866
   C   80212368
   B     117631

Two things stand out about these figures – first it’s an ideal example of a case where it would be nice to avoid having index entries for the 80 million ‘C’ rows. Depending on the coding and testing costs, the supportability of the application and the possible side effects this could be done with a function-based index, or by introducing a virtual column that hides the ‘C’s behind a NULL, or by changing the code to use NULL instead of ‘C’.

Secondly – I made a comment about rewriting the code to query the B’s before the A’s. But we saw that Oracle worked through about 117,000 rows before returning a result: so the figures above tell us that it must have worked through almost all the B’s – the handful of A’s was just a tiny blip before it got to the B iteration – so there’s no point in making that change.

My suggestion for the function-based index above could be modified in two ways, of course – add two more “when”s to each “case” to capture the D and E rows, or take the opposite viewpoint and create an index on expressions like: “case sts when ‘C’ then to_char(null) else sts end”. The benefit of the latter approach is that you don’t have to modify the index definition (and rebuild the index) if a new legal value for sts appears.
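For what it’s worth, here’s a sketch of that second variant (the index name is purely illustrative; I’ve shown only the sts expression mentioned above, and you’d probably want to add the dt expression in the same style):

create index tab1_not_c on tab1 (
        case sts when 'C' then to_char(null) else sts end
) compress 1
;

select  *
from    tab1
where   case sts when 'C' then to_char(null) else sts end in ('A','B')
and     dt  < sysdate
and     cnt < '4'
and     rownum <= 1
;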

Extended Histograms – 2

Thu, 2018-08-02 08:13

Following on from the previous posting which raised the idea of faking a frequency histogram for a column group (extended stats), this is just a brief demonstration of how you can do this. It’s really only a minor variation of something I’ve published before, but it shows how you can use a query to generate a set of values for the histogram and it pulls in a detail about how Oracle generates and stores column group values.

We’ll start with the same table as we had before – two columns which hold only the combinations (‘Y’, ‘N’) or (‘N’, ‘Y’) in a very skewed way, with a requirement to ensure that the optimizer provides an estimate of 1 if a user queries for (‘N’,’N’) … and I’m going to go the extra mile and create a histogram that does the same when the query is for the final possible combination of (‘Y’,’Y’).

Here’s the starting code that generates the data, and creates histograms on all the columns (I’ve run this against 12.1.0.2 and 12.2.0.1 so far):


rem
rem     Script:         histogram_hack_2a.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2018
rem
rem     Last tested 
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

create table t1
as
select 'Y' c2, 'N' c3 from all_objects where rownum <= 71482 -- > comment to deal with wordpress format issue.
union all
select 'N' c2, 'Y' c3 from all_objects where rownum <= 1994 -- > comment to deal with wordpress format issue.
;

variable v1 varchar2(128)

begin
        :v1 := dbms_stats.create_extended_stats(null,'t1','(c2,c3)');
        dbms_output.put_line(:v1);
end;
/

execute dbms_stats.gather_table_stats(null, 't1', method_opt=>'for all columns size 10');

In a variation from the previous version of the code I’ve used the “create_extended_stats()” function so that I can return the resulting virtual column name (also known as an “extension” name) into a variable that I can use later in an anonymous PL/SQL block.

Let’s now compare the values stored in the histogram for that column with the values generated by a function call that I first referenced a couple of years ago:


select
        endpoint_value
from 
        user_tab_histograms
where
        table_name = 'T1'
and     column_name = :v1
;

select 
        distinct c2, c3, 
        mod(sys_op_combined_hash(c2,c3),9999999999) endpoint_value
from t1
;

ENDPOINT_VALUE
--------------
    4794513072
    6030031083

2 rows selected.


C C ENDPOINT_VALUE
- - --------------
N Y     4794513072
Y N     6030031083

2 rows selected.

So we have a method of generating the values that Oracle should store in the histogram; now we need to generate 4 values and supply them to a call to dbms_stats.set_column_stats() in the right order with the frequencies we want to see:


declare
        l_distcnt number;
        l_density number;
        l_nullcnt number;
        l_avgclen number;

        l_srec  dbms_stats.statrec;
        n_array dbms_stats.numarray;

begin
        dbms_stats.get_column_stats (
                ownname =>null,
                tabname =>'t1',
                colname =>:v1,
                distcnt =>l_distcnt,
                density =>l_density,
                nullcnt =>l_nullcnt,
                avgclen =>l_avgclen,
                srec    =>l_srec
        );

        l_srec.novals := dbms_stats.numarray();
        l_srec.bkvals := dbms_stats.numarray();

        for r in (
                select
                        mod(sys_op_combined_hash(c2,c3),9999999999) hash_value, bucket_size
                from    (
                        select 'Y' c2, 'Y' c3, 1 bucket_size from dual
                        union all
                        select 'N' c2, 'N' c3, 1 from dual
                        union all
                        select 'Y' c2, 'N' c3, 71482 from dual
                        union all
                        select 'N' c2, 'Y' c3, 1994 from dual
                        )
                order by hash_value
        ) loop
                l_srec.novals.extend;
                l_srec.novals(l_srec.novals.count) := r.hash_value;

                l_srec.bkvals.extend;
                l_srec.bkvals(l_srec.bkvals.count) := r.bucket_size;
        end loop;

        n_array := l_srec.novals;

        l_distcnt  := 4;
        l_srec.epc := 4;

--
--      For 11g rpcnts must not be mentioned
--      For 12c it must be set to null or you
--      will (probably) raise error:
--              ORA-06533: Subscript beyond count
--

        l_srec.rpcnts := null;

        dbms_stats.prepare_column_values(l_srec, n_array);

        dbms_stats.set_column_stats(
                ownname =>null,
                tabname =>'t1',
                colname =>:v1,
                distcnt =>l_distcnt,
                density =>l_density,
                nullcnt =>l_nullcnt,
                avgclen =>l_avgclen,
                srec    =>l_srec
        );

end;

The outline of the code is simply: get_column_stats, set up a couple of arrays and simple variables, prepare_column_values, set_column_stats. The special detail that I’ve included here is that I’ve used a “union all” query to generate an ordered list of hash values (with the desired frequencies), then grown the arrays one element at a time to copy them in place. (That’s not the only option at this point, and it’s probably not the most efficient option – but it’s good enough). In the past I’ve used this type of approach but used an analytic query against the table data to produce the equivalent of 12c Top-frequency histogram in much older versions of Oracle.

A couple of important points – I’ve set the “end point count” (l_srec.epc) to match the size of the arrays, and I’ve also changed the number of distinct values to match. For 12c to tell the code that this is a frequency histogram (and not a hybrid) I’ve had to null out the “repeat counts” array (l_srec.rpcnts). If you run this on 11g the reference to rpcnts is illegal so has to be commented out.

After running this procedure, here’s what I get in user_tab_histograms for the column:


select
        endpoint_value                          column_value,
        endpoint_number                         endpoint_number,
        endpoint_number - nvl(prev_endpoint,0)  frequency
from    (
        select
                endpoint_number,
                lag(endpoint_number,1) over(
                        order by endpoint_number
                )                               prev_endpoint,
                endpoint_value
        from
                user_tab_histograms
        where
                table_name  = 'T1'
        and     column_name = :v1
        )
order by endpoint_number
;

COLUMN_VALUE ENDPOINT_NUMBER  FREQUENCY
------------ --------------- ----------
   167789251               1          1
  4794513072            1995       1994
  6030031083           73477      71482
  8288761534           73478          1

4 rows selected.


It’s left as an exercise to the reader to check that the estimated cardinality for the predicate “c2 = ‘N’ and c3 = ‘N'” is 1 with this histogram in place.
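If you want a quick way to do the check, something like this (just a sketch) will do – the Rows estimate in the plan should be 1, and the same should be true for the (‘Y’,’Y’) combination:

explain plan for
select  *
from    t1
where   c2 = 'N'
and     c3 = 'N'
;

select * from table(dbms_xplan.display);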
