Author: lotharflatz

Solution: Can a select statement create a transaction Lock?

I would like to thank Andrew Sayer for the correct solution. A select statement without a FOR UPDATE clause cannot set a transaction lock. The transaction lock must therefore come from another statement of the same session.
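
To illustrate the mechanism, here is a minimal sketch against the SH schema (the statements are mine, not the original customer case): an uncommitted update keeps the TM lock on the table, and every later statement of that session, including a plain select, runs while that lock is still held.

-- session 1: an uncommitted DML takes a (row-exclusive) TM lock on SALES
update sales set amount_sold = amount_sold where rownum = 1;
-- session 1 then runs only a long select; the TM lock stays
-- until the transaction commits or rolls back

-- session 2: a request for an incompatible TM lock now has to wait
-- and shows up under the wait event "enq: TM - contention"
lock table sales in exclusive mode;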

As long as no commit has taken place, the lock is still held. We would therefore have to go back in time in active session history to determine where the transaction lock originated.
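
A sketch of that look back (assuming Diagnostics Pack licensing for ASH; the bind variables stand for the blocking session's ID and the time the wait started, taken from the first query):

-- which sessions waited on the TM enqueue, and who blocked them
select sample_time, session_id, blocking_session, sql_id, event
  from v$active_session_history
 where event like 'enq: TM%';

-- what the blocking session was doing before the wait started;
-- the DML that opened the transaction should show up here
select sample_time, sql_id, sql_opname, event
  from v$active_session_history
 where session_id = :blocking_sid
   and sample_time < :wait_start_time
 order by sample_time desc;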

Basically, it is a good idea to speed up the long-running select statement. This allows the transaction lock to be held for a shorter period of time, since the commit point can be reached more quickly.


Can a select statement create a transaction Lock?

Recently I had to research an “enq: TM contention” wait event. I queried active session history (ASH) and found the blocking session’s ID.

I queried ASH again and found what the blocking session was doing right at the time when the other session was waiting on the “enq: TM contention” lock. At that time a long-running select statement was active in the blocking session.

How is that possible? Can a select create a lock? What must have happened? Would it actually help to tune the select?

For a further hint you might want to read Arup Nanda’s blog on transaction locks:

enq: TX row lock contention and enq:TM contention

Solution: Index Rebuild: Magic or Voodoo?

Of course my readers have solved the problem. I would like to thank Jure Bratina, Andrew Sayer and Martin Berger for their contributions.
In the following you will see the entire test case in a commented form. IMHO most of it is self-explanatory.
The following little script represents the initial setup. The schema is, as I said, the SH example schema.

alter table sales add  (sparse varchar2(300)); 
update sales set sparse = rpad('sometext',300, '*');
commit;
create index sparse_idx on sales (sparse);

Let us now examine the size of the index segment:

select blocks from user_segments where segment_name ='SPARSE_IDX';

   BLOCKS
---------
    44032

select leaf_blocks from user_indexes where index_name ='SPARSE_IDX';

LEAF_BLOCKS
-----------
      41766

Here is an update similar to the one I found in the original database:

update sales set sparse=NULL;

918843 rows updated.

exec dbms_stats.gather_table_stats(user,'SALES');

And how does it affect the statistics and the segments?

select leaf_blocks from user_indexes where index_name ='SPARSE_IDX';

LEAF_BLOCKS
-----------
          0
select blocks from user_segments where segment_name ='SPARSE_IDX';

    BLOCKS
----------
     44032

The optimizer only checks the index statistics. It does not check the segment statistics. The optimizer therefore believes that the index is tiny.
All the non-null values of the column sparse must be in the index. Therefore it is optimal, from the optimizer’s perspective, to scan the supposedly small index.
However, the index segment still has its full size. An index rebuild is required to resolve the issue.
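
A minimal sketch of that fix on the test case (the comment about the resulting size is an expectation, not a measured value):

alter index sparse_idx rebuild;

select blocks from user_segments where segment_name ='SPARSE_IDX';
-- the segment should now be only a few blocks, so the
-- optimizer's assumption and the real index size match again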

Index Rebuild: Magic or Voodoo?

I actually don’t like to write about todays topic. However, I hope you will agree this story is worth to be told.

Via Freelists somebody asked whether an index rebuild could occasionally be useful. There was a case at hand where an index rebuild seemed to be useful after a big delete. I sometimes get annoyed by the popular myth that regular index rebuilds are helpful. This is one of the more popular database myths, as branded by Mr. Index, Richard Foote.

Thus, I could not resist the opportunity to trash the myth I don't like and answered that there is no such case. I ignored the warning whisper in my head telling me that things are never that easy.
Of course, the unavoidable happened: Jonathan Lewis corrected me, stating that there are corner cases where an index rebuild can be useful (e.g. after a big delete).

As if this were not enough, shortly thereafter I came across the little statement below. (I am presenting a test case that makes use of the SH schema.)

select time_id from sales where sparse is not null;

The execution plan with runtime statistics looks like this:


-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |            |      1 |        |      0 |00:00:00.04 |   41770 |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| SALES      |      1 |      1 |      0 |00:00:00.04 |   41770 |
|*  2 |   INDEX FULL SCAN                          | SPARSE_IDX |      1 |      1 |      0 |00:00:00.04 |   41770 |
-------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("SPARSE" IS NOT NULL)

Why did the optimizer choose a full index scan? Actually a full table scan would be much better. The estimate (E-Rows) is correct. The statistics are current.

Can you guess what is missing from my test case? Hint: I researched the DML on this table and I found an update statement.

SOLUTION: A Strange Condition in the Where Clause

The strange condition is automatically generated by the database.
The cause is a DDL optimization that is available starting with version 11g.
When you add a column to a table, the database will not necessarily create a physical column.
It may generate a “DDL optimized” column instead. When a default is specified, such a column can also be defined not null.
That way the database saves the effort of extending each record by one column.
It only generates an entry in the dictionary, which of course is much faster.
Each row of data may contain a real value for the “DDL optimized” column if a value was inserted.
If no value was inserted, the default value is used instead.
Since it is possible that no value physically exists in the row, the database needs to replace the column name with a formula.
Here is a simple example:

create table x (y number);
insert into x select rownum from dual connect by rownum < 1000000;
commit;
alter table x add ( z number default 1 not null);
select 1 from x where z=1;

If we look at the execution plan of the query, we notice that the column name Z was replaced by the formula:


(NVL("Z",1)=1)
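
If you want to reproduce this yourself, a quick way to see the rewritten predicate is a plain EXPLAIN PLAN (a sketch; DBMS_XPLAN prints the predicate section by default):

explain plan for select 1 from x where z=1;
select * from table(dbms_xplan.display);
-- the Predicate Information section shows filter(NVL("Z",1)=1)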

Here is the link to Carlos's blog: Interesting case where a full table scan is chosen instead of an index

A Strange Condition in the Where Clause

It makes me proud to learn that Carlos Sierra is following my blog. I knew Carlos from my time at Oracle, even though we only recently met for the first time. I see Carlos as a man who acts rather than complains. One of the people that make a difference.
With my next example I will show that he is also an astute analyst.
I recently saw a condition in an execution plan that was not present in the SQL statement. I wanted to know where the condition came from.
Here is a simple example: consider a table x that looks like this:


SQL> desc x
 Name                                      Null?    Type
 ----------------------------------------- -------- -----------------
 Y                                                   NUMBER
 Z                                          NOT NULL NUMBER

Let us do a query on x:


select count(*) from x where z=1;

The execution plan looks like this:


Plan hash value: 989401810

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   420 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    13 |            |          |
|*  2 |   TABLE ACCESS FULL| X    |  1144K|    14M|   420   (2)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(NVL("Z",1)=1)

Why does the execution plan show this strange condition? You will find the answer in Carlos Sierra's blog.
One more hint: It has something to do with default values. Good luck.

Solution: Parallel Activity just drops

As you remember, we had a hash outer join with dominant null values. Thus, the parallel process dealing with the null values got most of the work to do.
How would we deal with it? Let us back up a bit.
A null value is a special case. In our join we are comparing a null value against a primary key that is defined not null. The null values will certainly not generate a match when joining. Therefore any of the parallel slaves could do the comparison, as long as it is guaranteed not to generate a match.
It is therefore beneficial to convert the null values into some other value, as long as we fulfill two prerequisites:

  1. we have to convert the null values in such a way that they get evenly distributed across all parallel slaves
  2. we must make sure that the converted values do not generate a match

It took me a long time to figure this out. First I tried something like negative values, but they all still went to the same slave.
The issue is that our newly generated key needs to be roughly in range with the primary key of the build table to be spread across all slaves.
I could have tried to figure out the range in a with clause reading the data dictionary, but I could not get myself to do something I considered uncool.
Instead I tried something else, which is honestly a bit of a hack, but it looks way better: I used the primary key of the outer table to generate a widespread not-null key.
I also took advantage of the fact that the IDs were integers and changed the join condition to:

FROM T_TAB1 T1
LEFT OUTER JOIN T_TAB2 T2
ON (NVL(T2.X_ID, T2_ID + 0.5) = T1.X_ID)

That is certainly somewhat tricky. The only thing I can say in its favor is that it kind of worked.

After I applied my solution the activity tab looked like this:

Compare this to the previous picture. It is quite impressive.
Well, in version 12 the database should be able to deal with join distribution skew. But it still does not work for all cases, as Randolf Geist states.
If someone does not like my solution, they might have a look into _fix_control 6808773, but I do not warrant either solution. 😉
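
For completeness, this is roughly what toggling a fix control at session level looks like; underscore parameters are unsupported territory, so treat this as a sketch to be tested, not a recommendation:

-- '6808773:1' enables the fix, '6808773:0' disables it
alter session set "_fix_control" = '6808773:1';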

 

Parallel Activity just drops

In one of my DWH POCs for Oracle's Real World Performance Group we saw an unexpected drop in activity. The customer wanted to know why it happened.
This is how SQL Monitor's activity tab looked:

I investigated carefully and found out that the reason was in both cases an outer join that looked like this:

FROM T_TAB1 T1
LEFT OUTER JOIN T_TAB2 T2
ON T2.X_ID    = T1.X_ID

I found that 90% of all values in T1.X_ID were null values. The distribution for the join was hash. Thus, the parallel slave dealing with the null values got most of the work and was for quite some time the only one running. Table T1 was the biggest table, so broadcasting it was out of the question.
(You may also compare Randolf’s article on skew parallel distribution: Parallel Execution Skew – Demonstrating Skew).
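
As a side note, one way to confirm such skew without SQL Monitor is V$PQ_TQSTAT, queried from the same session right after the parallel statement finishes (a sketch; the view only covers the session's last parallel execution):

select dfo_number, tq_id, server_type, process, num_rows
  from v$pq_tqstat
 order by dfo_number, tq_id, server_type, process;
-- one PX process with far more NUM_ROWS than its siblings
-- is the tell-tale sign of distribution skew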

Can you now re-code the join condition such that the join is done by all parallel slaves? (This was version 11g. In version 12 the optimizer should take care of it.) Further assumption: table T2 has a one-column primary key called T2_ID.

 

Solution: The Collection in the Collection

I found the crucial clues to solve the task on Adrian Billington's blog. It's an older post, but still valid: an introduction to bulk PL/SQL enhancements in 9i.
The code below should be self-explanatory together with the comments.
To show how to access the data, I have added output via DBMS_OUTPUT.

-- First you have to create the types for the nested collection

CREATE OR REPLACE type emp_t
AS
  object
  (
    EMPNO NUMBER(4),
    ENAME VARCHAR2(10),
    SAL  NUMBER(7,2),
    COMM NUMBER(7,2) 
  )
/
create or replace TYPE tbl_emp  AS TABLE OF emp_t
/

 

DECLARE
  CURSOR c1
  IS
    SELECT deptno,
      dname,
      CAST (MULTISET( SELECT empno,ename,sal, comm FROM emp e WHERE e.deptno= d.deptno
                     ) AS tbl_emp  
           ) as emps -- I need an alias to be able to refer to the embedded collection
  FROM dept d;

  TYPE tbl_dept IS TABLE OF c1%ROWTYPE;
  depts tbl_dept ;
BEGIN
   OPEN c1;
   FETCH c1 BULK COLLECT INTO depts;
   CLOSE c1;
   FOR i IN 1..depts.COUNT
   LOOP
      DBMS_OUTPUT.PUT_LINE(depts(i).deptno);
      FOR j IN  1.. depts(i).emps.COUNT
      LOOP
        DBMS_OUTPUT.PUT_LINE('***'||depts(i).emps(j).ename);
      END LOOP;
   END LOOP;
END;
/
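
One small usage note: DBMS_OUTPUT is only visible if the client fetches the buffer, so in SQL*Plus or SQLcl you would enable it before running the block:

set serveroutput on size unlimited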

 

The Collection in The Collection

We had to rewrite a commonly used PL/SQL function. The code suffered heavily from context switches; it was literally called millions of times. We set ourselves the goal of having a single bulk collect to read all the data from the database. Here is a code skeleton that I have adapted to the scott/tiger schema.

DECLARE
   CURSOR c1
   IS
   SELECT deptno, dname
     FROM dept;
   CURSOR c2 (p_deptno NUMBER)
       IS
       SELECT empno,ename,sal, comm
         FROM emp
        WHERE deptno= p_deptno;
BEGIN
   FOR c1rec IN C1
   LOOP
      FOR c2rec IN c2(c1rec.deptno)
      LOOP
         NULL;
      END LOOP;
   END LOOP;
END;
/

 


In reality, of course, there was complex logic instead of NULL in the loop. I worked jointly with a developer of the software vendor on the problem. We did not have enough time to understand this code in depth. We decided to only formally convert the code and to keep the loops. As mentioned, we wanted to read all the data in a single step into a nested array.
There should be an outer department collection that contains an inner employee collection as a nested table. There are other solutions (Bryn Llewellyn thankfully pointed out one in the comments), but the nested table solution merges smoothly with the legacy code. As in our example, in reality the amount of data per query was so small that it would all easily fit into one array.

What does the select look like?