Solution: The Collection in the Collection

I found the crucial clues to solve this task on Adrian Billington’s blog. It’s an older post, but still valid: introduction to bulk PL/SQL enhancements in 9i.
The code below should be self-explanatory thanks to the comments.
To show how to access the data, I have added output via DBMS_OUTPUT.

-- First you have to create the types for the nested collection

CREATE OR REPLACE TYPE emp_t AS OBJECT
(
  EMPNO NUMBER(4),
  ENAME VARCHAR2(10),
  SAL   NUMBER(7,2),
  COMM  NUMBER(7,2)
)
/
CREATE OR REPLACE TYPE tbl_emp AS TABLE OF emp_t
/

 

DECLARE
  CURSOR c1
  IS
    SELECT deptno,
      dname,
      CAST (MULTISET( SELECT empno, ename, sal, comm
                        FROM emp e
                       WHERE e.deptno = d.deptno
                    ) AS tbl_emp
           ) AS emps -- an alias is needed to reference the embedded collection
  FROM dept d;

  TYPE tbl_dept IS TABLE OF c1%ROWTYPE;
  depts tbl_dept;
BEGIN
   OPEN c1;
   FETCH c1 BULK COLLECT INTO depts;
   CLOSE c1;
   FOR i IN 1..depts.COUNT
   LOOP
      DBMS_OUTPUT.PUT_LINE(depts(i).deptno);
      FOR j IN 1..depts(i).emps.COUNT
      LOOP
        DBMS_OUTPUT.PUT_LINE('***'||depts(i).emps(j).ename);
      END LOOP;
   END LOOP;
END;
/
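Against the classic scott/tiger data, this block produces output like the following (without an ORDER BY the row order is not guaranteed):

10
***CLARK
***KING
***MILLER
20
***SMITH
***JONES
***SCOTT
***ADAMS
***FORD
30
***ALLEN
***WARD
***MARTIN
***BLAKE
***TURNER
***JAMES
40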

 


The Collection in the Collection

We had to rewrite a commonly used PL/SQL function. The code suffered heavily from context switches: it was literally called millions of times. We set ourselves the goal of reading all the data from the database with a single BULK COLLECT. I give here a code skeleton that I have adapted to the scott/tiger schema.

DECLARE
   CURSOR c1
   IS
   SELECT deptno, dname
     FROM dept;
   CURSOR c2 (p_deptno NUMBER)
       IS
       SELECT empno,ename,sal, comm
         FROM emp
        WHERE deptno= p_deptno;
BEGIN
   FOR c1rec IN c1
   LOOP
      FOR c2rec IN c2(c1rec.deptno)
      LOOP
         NULL;
      END LOOP;
   END LOOP;
END;
/

 


In reality, of course, there was complex logic instead of NULL in the loop. I worked jointly with a developer of the software vendor on the problem. We did not have enough time to fully understand this code, so we decided to only formally convert it and keep the loops. As mentioned, we wanted to read all the data in a single step into a nested array.
There should be an outer department collection that contains an inner employee collection as a nested table. There are other solutions (Bryn Llewellyn thankfully pointed out one in the comments), but the nested table solution merges smoothly with the legacy code. As in our example, in reality the amount of data per query was so small that it would all easily fit into one array.

What does the SELECT look like?

 

Solution: Is the migration database slower?

On OTN someone once asked whether the optimizer takes the size of the buffer cache into account when creating the execution plan. Under the usual disclaimer (it could be factored into the cost calculation), I would say: no. How should the size of the cache be considered anyway? If I have a larger buffer cache, is it more likely that a segment X is in the cache, so I should do an index access? (I argue here analogously to the parameter OPTIMIZER_INDEX_CACHING.) That is not a good logic. The size of the buffer cache is not a reliable indicator of whether a particular segment is actually in the cache or not. As we will see later, the size of the cache may even be an argument for a full table scan.

To create an execution plan, the selectivity estimate is the key factor.

However, how an existing execution plan is executed is another matter. Here the execution engine makes the final decision.

For example, a full table scan (FTS) is often executed using “direct path reads”. This means that inevitably a physical I/O will be done. (For the database it is a physical I/O even if the result comes from the file system cache.)

But if the execution engine detects that a table is almost fully present in the buffer cache, it can switch the execution from a “direct path read” to a “scattered read”. The “scattered read”, in contrast to the “direct path read”, will benefit from the buffer cache.
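Which read type a session actually used can be observed from its wait events. A minimal sketch (:sid stands for the session ID under investigation):

SELECT event, total_waits
  FROM v$session_event
 WHERE sid = :sid
   AND event IN ('db file scattered read', 'direct path read');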

Tanel Poder wrote a good, albeit somewhat older, explanation of this concept: Optimizer statistics-driven direct path read decision for full table scans.

In short, there are two prerequisites that must be fulfilled for the execution engine to switch to “scattered reads”:

  1.  The buffer cache must be large enough to hold the segment which is the subject of the FTS.
  2.  The segment must actually be almost completely cached.
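The first prerequisite can be checked with two simple queries. A sketch, assuming the default buffer pool and a placeholder table name MY_TABLE:

SELECT current_size/1024/1024/1024 AS buffer_cache_gb
  FROM v$sga_dynamic_components
 WHERE component = 'DEFAULT buffer cache';

SELECT bytes/1024/1024/1024 AS segment_gb
  FROM dba_segments
 WHERE segment_name = 'MY_TABLE';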


The first point is thus easy to check. The analysis showed the following: the table was about 25 GB in size, both in production and in migration.
The buffer cache was 55 GB in production, but only 5 GB in migration.
Now we can conclude with certainty: in production the table could be cached to 100% of its size, so theoretically the runtime engine can initiate a scan in memory. In migration this is certainly not possible.
Was the table now actually cached to almost 100% in production?
To clarify this question, the statement below, which determines the number of cached blocks of a table, can be used.


SELECT SUM(x.num_buf)
  FROM X$KCBOQH x, dba_objects o
 WHERE x.obj# = o.object_id
   AND o.object_name = 'MY_TABLE';

If the number calculated this way is almost identical to the number of blocks of the table (according to my tests > 90%, but no guarantee), “scattered reads” will be used.
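To put the cached blocks directly in relation to the total size of the segment, a variant like the following can be used (a sketch; X$KCBOQH is an internal view that requires SYS privileges, and MY_TABLE is again a placeholder):

SELECT b.cached_blocks,
       s.blocks AS total_blocks,
       ROUND(100 * b.cached_blocks / s.blocks, 1) AS pct_cached
  FROM (SELECT o.object_name, SUM(x.num_buf) AS cached_blocks
          FROM X$KCBOQH x, dba_objects o
         WHERE x.obj# = o.object_id
           AND o.object_name = 'MY_TABLE'
         GROUP BY o.object_name) b,
       dba_segments s
 WHERE s.segment_name = b.object_name;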

The analysis of the production database revealed that the table was indeed for the most part in the buffer cache. I could also show that the rapid FTS in memory sometimes did not happen, due to a lack of cached blocks. We then double-checked our result: we increased the buffer cache of the migration environment and loaded the table into the cache.
After that the FTS was fast in migration as well.
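One way to load a table into the buffer cache is to temporarily disable serial direct path reads and force a full scan. This is only a sketch: "_serial_direct_read" is an undocumented parameter whose behaviour varies between versions, and my_table is a placeholder.

ALTER SESSION SET "_serial_direct_read" = never;
SELECT /*+ FULL(t) */ COUNT(*) FROM my_table t;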

Some remarks:

Frits Hoogland published a newer, deeper analysis of the subject:

Investigating the full table direct path / buffered decision

Because of several observations I think the Exadata storage server can apply similar algorithms in relation to Smart Scan: a Smart Scan can be converted into a scan of the storage server’s flash cache. However, I do not have enough evidence to be 100% certain.

Is the migration database slower?

My customer set up a database in a virtual environment to support a migration. Initial tests show that the migration database is many times slower than the production database. The tests focus on a particular query.

Here is an excerpt from an AWR report of the migration database. It shows the relevant data for the SQL command you should investigate, taken from the list “SQL ordered by Elapsed Time”.

Elapsed Time (s)  Executions  Elapsed Time per Exec (s)  %Total  %CPU  %IO
199.24            1           199.24                     98.16   3.93  96.72

For comparison, the same data from the production database:

Elapsed Time (s)  Executions  Elapsed Time per Exec (s)  %Total  %CPU   %IO
11.02             1           11.02                      65.95   99.98  0.0

What makes you curious? What working hypothesis would you choose to start your investigation, and what would you check? Hint: the SQL statement we are investigating is a count that scans only one table. The execution plan is identical in both cases; it is in each case a full table scan.

Solution: Why is the new Hardware slower?

As Martin Preiss already pointed out on Twitter, Tanel has documented in LOBREAD SQL Trace entry in Oracle 11.2 that trace entries like LOBREAD really refer to LOBs and that they probably came in with version 11.2.0.2.
Thus we knew that there had been a schema change in the database on the new hardware. The next decisive hint was this piece of the raw trace (again spotted by Martin Berger):

FETCH #25:c=1154407,e=1152124,p=0,cr=102603,cu=0,mis=0,r=101,dep=0,og=1,tim=650755949521

versus


FETCH #601010888:c=31200,e=22483,p=0,cr=3706,cu=50,mis=0,r=1,dep=0,og=1,plh=3621104505,tim=39783214696

Thus it looks like with the LOB we are retrieving one row at a time, while without the LOB we are doing an array fetch of 101 rows at a time. How is this possible when the program code is identical?
Can the existence of a LOB somehow inhibit the array fetch?
Well, indeed it can, as documented here: Single Row Fetch from a LOB (thanks, Hemant).
Stefan Köhler pointed out that it also depends on the driver: single row fetch depends on client.
Thus, when we changed the LOB column to a VARCHAR2, the new hardware was faster than the old one.
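For illustration, a sketch of such a conversion (hypothetical table and column names; a LOB column cannot be modified to VARCHAR2 directly, so the data has to be copied):

ALTER TABLE t ADD (val_new VARCHAR2(4000));
-- only safe if no value exceeds 4000 bytes
UPDATE t SET val_new = DBMS_LOB.SUBSTR(val_lob, 4000, 1);
ALTER TABLE t DROP COLUMN val_lob;
ALTER TABLE t RENAME COLUMN val_new TO val;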

Why is the new Hardware slow?

You buy new hardware to be faster; that is a normal expectation. But what if the new hardware is slower than the old one? Speculation about the cause ran wild. Since I had to wait for this assignment due to the holiday season, the tension was high when the examination could finally begin.

A quick check showed that the new hardware as such was no slower than the old one. The decisive hint came from a raw trace. I show only the important part.

On the old hardware the trace looked like this:


FETCH #25:c=1154407,e=1152124,p=0,cr=102603,cu=0,mis=0,r=101,dep=0,og=1,tim=650755949521

Cursor #25 is a large select that runs slowly. On the new hardware the trace looked as follows:


FETCH #601010888:c=31200,e=22483,p=0,cr=3706,cu=50,mis=0,r=1,dep=0,og=1,plh=3621104505,tim=39783214696
WAIT #601010888: nam='SQL*Net message from client' ela= 171 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=39783217398
LOBGETLEN: c=0,e=3,p=0,cr=0,cu=0,tim=39783217416
WAIT #0: nam='SQL*Net message to client' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=39783217423
WAIT #0: nam='SQL*Net message from client' ela= 117 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=39783217546
WAIT #0: nam='SQL*Net message to client' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=39783217560
LOBREAD: c=0,e=12,p=0,cr=1,cu=0,tim=39783217567

Cursor #601010888 corresponds to cursor #25 on the old hardware. The database on the new hardware is version 11, the database on the old hardware version 10.
Obviously, apart from the version, there is at least one other difference between the two databases. What is it? And what is the effect of this difference?
Both databases are accessed by the exact same program, which is implemented using MS Visual Studio.

Solution: Why is no alias needed?

Well, let’s first answer the supplementary question. It seems that the attributes of a type cannot be accessed directly in SQL.
As Martin Berx rightly remarked in the comment on the English blog, you must first cast to the type.
In this case:

TREAT(CUST_ADDRESS AS CUST_ADDRESS_TYP).STATE_PROVINCE = 'FL'

This is a useful little trick that might help on occasion.
Now you can execute the statement. My guess was that the natural-join semantics of the USING clause ensure that you are not compelled to use an alias.
In fact, replacing the USING (order_id) clause with an ON join:

SELECT order_id,
       SUM(quantity*unit_price)
  FROM order_items
HAVING SUM(quantity*unit_price) > ALL
       (SELECT SUM(quantity*unit_price)
          FROM customers
          JOIN orders o USING (customer_id)
          JOIN order_items oi ON (oi.order_id = o.order_id)
         WHERE TREAT(CUST_ADDRESS AS CUST_ADDRESS_TYP).STATE_PROVINCE = 'FL'
         GROUP BY order_id
       )
 GROUP BY order_id

results in:

ERROR at line 10:
ORA-00918: column ambiguously defined

You need to qualify order_id in the GROUP BY with a prefix:


SELECT order_id,
       SUM(quantity*unit_price)
  FROM order_items
HAVING SUM(quantity*unit_price) > ALL
       (SELECT SUM(quantity*unit_price)
          FROM customers
          JOIN orders o USING (customer_id)
          JOIN order_items oi ON (oi.order_id = o.order_id)
         WHERE TREAT(CUST_ADDRESS AS CUST_ADDRESS_TYP).STATE_PROVINCE = 'FL'
         GROUP BY o.order_id
       )
 GROUP BY order_id

In my opinion, it should always work as in the case of the natural join: since you have already declared that the columns are equal, the database could simply pick either one.
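A small sketch of the underlying behaviour (minimal hypothetical queries against the OE schema): with a USING clause the join column is coalesced into a single column, and qualifying it is even forbidden.

-- works: the coalesced column needs no qualifier
SELECT order_id FROM orders JOIN order_items USING (order_id);

-- fails with ORA-25154: column part of USING clause cannot have qualifier
SELECT o.order_id FROM orders o JOIN order_items USING (order_id);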

 

Why do we not need an alias?

Here’s a question from OTN. I’m curious what you have to say. (Please don’t look up the answer on OTN.)


select order_id, sum(quantity*unit_price)
from order_items
having sum(quantity*unit_price) > ALL
(select sum(quantity*unit_price)
from customers
join orders using (customer_id)
join order_items using (order_id )
where STATE_PROVINCE = 'FL'
group by order_id)
group by order_id;

The question is: why do we not need an alias for order_id in the “group by order_id” clause of the subquery, although the column appears in two tables?
To make it a bit more difficult, I have built in a small additional hurdle: STATE_PROVINCE is, in the OE schema of version 12 (I have not tested other versions), an attribute of an object type column. Therefore the query does not run as shown here. How should we modify it so that it works?

Solution: Where did the downgrades come from?

As Stefan already correctly pointed out, the downgrades originated from parallel sessions that Toad had issued. Under special circumstances the invoking client process has long ceased to exist, but the database session stays. As a result we came under increasing pressure, because the number of these sessions grew slowly but steadily: fewer and fewer new PX servers could be allocated, because the pool is not infinite.
As a countermeasure we activated a profile that kills inactive sessions after a certain time.
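A sketch of such a profile (hypothetical names; IDLE_TIME is specified in minutes, and the limit is only enforced when RESOURCE_LIMIT is TRUE):

ALTER SYSTEM SET resource_limit = TRUE;
CREATE PROFILE kill_idle LIMIT IDLE_TIME 60;
ALTER USER toad_user PROFILE kill_idle;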
The second reason was that the parameter parallel_adaptive_multi_user was set to TRUE. This ensured that only about half of the parallel server pool could be used for parallel processing. The parameter was obviously absurd in a data warehouse. However, the local DBAs insisted on not changing the default. For the first time I was faced with the absurd idea that a default is something of a taboo. Instead, they triumphantly pointed out that parallel_threads_per_cpu was set to its default of 2. However, the OS had reported every thread as a CPU, so twice as many CPUs were reported as physically existed.
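For reference, a sketch of how the parameters in question can be inspected:

SELECT name, value
  FROM v$parameter
 WHERE name IN ('parallel_adaptive_multi_user',
                'parallel_threads_per_cpu',
                'parallel_max_servers',
                'cpu_count');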
With respect to the downgrades, three meaningless parameter values thus happily neutralized each other. Why the Resource Manager was still running, however, I honestly do not get. 😉

Where did the downgrades come from?

In a data warehouse we faced an increasing number of downgrades of parallelism, and it seemed to be getting worse by the day. Indeed, it turned out that the number of freely available parallel processes was constantly decreasing. A closer examination of the PX servers in use showed that many had Toad as their source.
How was it possible that the number of parallel sessions originating from Toad grew constantly?
On the other hand, a comfortable number of PX servers was still available. How was it possible that we were still seeing downgrades? What parameters should we check?
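For reference, a sketch of how such an examination can be done (assuming access to the v$ views):

-- which client programs hold PX server sessions?
SELECT s.program, COUNT(*)
  FROM v$px_session px
  JOIN v$session s ON s.sid = px.sid AND s.serial# = px.serial#
 GROUP BY s.program;

-- how many parallel operations were downgraded?
SELECT name, value
  FROM v$sysstat
 WHERE name LIKE 'Parallel operations%';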