#DOAG2016: Bug !

I did it: in my proudest moment at DOAG2016, at the very end of my second presentation, I presented a “brilliant” SQL query with a bug in it. After agonizing for a while, I finally saw that the correction was simple and didn’t change the timing.

If you were there, the bug was on slide 21, where I joined two tables with overlapping ranges. For some reason I thought I should order the processing by the end of the range (which sometimes works) instead of by the start of the range (which always works – so far).

I have uploaded the corrected presentation to the DOAG website and to slideshare. I will be blogging about the content, so stay tuned.

Here is the link to the presentation on slideshare:

Apologies for the bug…

#DOAG2016: Ranges, Ranges Everywhere!

Today I did my second presentation at DOAG2016. It was at 9:00 so I got to sleep in ;)

The room was huge but there were enough people that I didn’t feel too lonely.

The room and the technical help were top notch, and again there were questions at just the right time to remind me of things I might have left out!

[Update 2016-11-20: there was a bug on slide 21 ! Instead of “order by end_n” it should be “order by start_n”. I have updated the slide on the DOAG website and on slideshare. My test cases were not thorough; my apologies…]

As promised, I put the presentation on Slideshare. Here is the link:

I will be blogging about some of the content later on, so stay tuned…

As always, please download the slides and play them in PowerPoint so you can see the animations.

Thanks to DOAG for the invite!

#DOAG2016: Advanced Row Pattern Matching

DOAG2016 started today at 8:30, and I did too. There were so many great presentations at the same time as mine, I was surprised and pleased to get a nice audience.

The room and the technical help were top notch, and the questions came at just the right time to remind me of things I might have left out!

As promised, I put the presentation on Slideshare. Here is the link:

http://www.slideshare.net/StewAshton1/advanced-row-pattern-matching

I will be blogging about some of the content later on, so stay tuned…

If you want to start with an “unadvanced” presentation, go here first:

http://www.slideshare.net/StewAshton1/row-pattern-matching-in-oracle-database-12c

As always, please download the slides and play them in PowerPoint so you can see the animations.

Thanks to DOAG for the invite!

I’m speaking at #DOAG2016 and #ukoug_tech16

This year I get to speak about advanced SQL twice at two different conferences. My first presentation is about row pattern matching with MATCH_RECOGNIZE and my second deals with ranges – including but not limited to Temporal Validity ranges.

#DOAG2016

DOAG2016 goes from November 15th through 18th in Nuremberg, Germany. I speak early in the morning on the 15th and 16th, at the same time as a huge number of great speakers. If you come to my talks anyway, I guarantee you will get great seats!

Meet your Match: Advanced Row Pattern Matching November 15th, 8:30

Ranges, Ranges Everywhere! November 16th, 9:00

#UKOUG_Tech16

UKOUG Tech 16 goes from December 5th through 7th in Birmingham, UK – with bonus content on “Super Sunday” afternoon, December 4th. I speak on Sunday and Monday afternoon, again at the same time as many great speakers.

Meet your Match: Advanced Row Pattern Matching December 4th, 16:10 (4:10 P.M.)

Ranges, Ranges Everywhere! December 5th, 14:10 (2:10 P.M.)

Hope to see you there!

 

OTN Appreciation Day: Tom Kyte #ThanksOTN

To answer Tim Hall’s call to appreciate OTN, I could have written about my go-to feature, the MATCH_RECOGNIZE clause, or my go-to development tool, Oracle SQL Developer. Instead, I’d like to salute my go-to “Oracle Technology” guy for over 10 years, Tom Kyte.

It was 2005. After almost 25 years in IT, I knew something about a lot of technologies, but not relational databases. I was trying to figure out why a product I had gotten my company to buy was running slowly, and the problem seemed to be in its use of the Oracle database. So, I got a colleague to show me how to write a SELECT statement, went to the Internet to learn how to trace and analyze Oracle performance problems and dove into the deep end…

It took a few weeks of Googling before I finally landed on asktom.oracle.com. Unbelievable! No other product on earth had such an expert, ready to explain any concept and solve any problem at the drop of a mouse, and for free. Sitting at Tom’s virtual feet almost daily, I not only solved my original problem but started a new career as an in-house SQL development specialist, contributor to asktom and OTN and later conference speaker.

Here is my first sentence posted on asktom 10 years ago: “As a first-time “reviewer”, MANY thanks to Tom whose invaluable insights have helped me find incredible performance boosts, although I am not a trained DBA.” I haven’t stopped thanking him since.

So many of us have learned from Tom about “worst practices”, the problems with cars that won’t start, WHEN OTHERS THEN NULL, and analytics that rock!

Tom, whatever you may be doing during or after your sabbatical, may your heart be ever warmed by the gratitude of the many you have helped and even inspired. You rock too!

Splitting Strings: PL/SQL

Marc Bleron and Todd Hershiser gave me some very valuable feedback through their comments on my recent “Splitting Strings” posts. The big news is: PL/SQL beats SQL!

fn:tokenize

XQuery provides some built-in functions prefixed by “fn”. fn:tokenize is equivalent to ora:tokenize, except that it doesn’t break when the delimiter is absent from the input string: it just returns the string. Marc says fn:tokenize is supported, and it does work in 12.1.0.2, but the official documentation says “not supported”. I have asked Marc for more information.

This is not a big deal. With fn:tokenize, there would simply be a bit less typing.
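
For illustration, here is what the fn:tokenize variant might look like, assuming the same test table t(id, str) and comma delimiter as the other examples in these posts. This is just my sketch of the simpler form, based on the 12.1.0.2 behavior mentioned above.

-- Sketch: no contains() guard is needed, because fn:tokenize simply
-- returns the input string when the delimiter is absent.
select id, subs from t, xmltable(
  'fn:tokenize($X, "\,")'
  passing str as X
  columns subs varchar2(4000) path '.'
);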

Escaping the delimiter

Todd pointed out that ora:tokenize splits a string based on a regular expression, and some common delimiters (like ‘|’) have special meaning in regular expressions. As a result, I now recommend simply escaping the delimiter with a backslash ‘\’.
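
To illustrate, here is a hypothetical one-off query using ‘|’ as the delimiter; without the backslash, ‘|’ would be interpreted as the regular-expression alternation operator instead of a literal character.

-- '|' is a regex metacharacter, so escape it as "\|" when using it as a delimiter
select subs from xmltable(
  'if (contains($X,"|")) then ora:tokenize($X,"\|") else $X'
  passing 'A|B|C' as X
  columns subs varchar2(4000) path '.'
);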

PL/SQL

Todd provided a string splitting function in PL/SQL and claimed it is clearly faster than ora:tokenize. He is right!

I wrote a function similar to his and compared it to the “tokenize” solution. Here is the function:

create or replace function string_tokenize(
  p_string in varchar2,
  p_delim in varchar2
)
return sys.odcivarchar2list pipelined
as
  i_prev_pos integer := 1;
  i_pos integer;
  i_max_pos integer := length(p_string) + 1;
  i_delim_length integer := length(p_delim);
begin
  loop
    i_pos := instr(p_string, p_delim, i_prev_pos);
    if i_pos = 0 then
      i_pos := i_max_pos;
    end if;
    pipe row(substr(p_string, i_prev_pos, i_pos - i_prev_pos));
    exit when i_pos = i_max_pos;
    i_prev_pos := i_pos + i_delim_length;
  end loop;
  return;
end string_tokenize;
/
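
To show the calling convention, here is a minimal standalone example of my own (not part of the timing test): the pipelined function is queried through TABLE() and each token comes back in COLUMN_VALUE.

-- Returns three rows: 000000001, 000000002, 000000003
select column_value as subs
from table(string_tokenize('000000001,000000002,000000003', ','));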

By the way, I tested this function with and without the PRAGMA UDF clause introduced in 12c; I found no difference in performance in this case. Here is my final test harness:

set serveroutput on
declare
  l_num number;
  l_timestamp timestamp;
  l_plsql_secs number;
  l_tokenize_secs number;
  l_num_substrings number := 10;
  procedure do(p_sql in varchar2) is
  begin
    dbms_output.put_line(p_sql);
    execute immediate p_sql;
  end do;
begin
  select count(*) into l_num from user_tables where table_name = 'T';
  if l_num > 0 then
    do('drop table t purge');
  end if;
  do('create table t(id number, str varchar2(4000)) cache');
  insert into t
  select level, to_char(level,'fm000000000')||',000000002,000000003,000000004,000000005,000000006,000000007,000000008,000000009,000000010'
  from dual
  connect by level <= 10000;
  commit;
  dbms_output.put_line('Substrings' || chr(9) || 'tokenize' || chr(9) || 'PL/SQL');
  for i in 1..10 loop
    select count(*) into l_num from t;
    
    l_timestamp := localtimestamp;
    select count(column_value) into l_num from (
      select id, column_value from t, table(string_tokenize(str, ','))
    );
    l_plsql_secs := extract(second from localtimestamp - l_timestamp);
    
    l_timestamp := localtimestamp;
    select count(subs) into l_num from (
      select id, subs from t, xmltable(
        'if (contains($X,",")) then ora:tokenize($X,"\,") else $X' 
        passing str as X columns subs varchar2(4000) path '.')
    );
    l_tokenize_secs := extract(second from localtimestamp - l_timestamp);
    dbms_output.put_line(l_num_substrings || chr(9) || l_tokenize_secs || chr(9) || l_plsql_secs);
    update t set str =
    str || ',000000001,000000002,000000003,000000004,000000005,000000006,000000007,000000008,000000009,000000010';
    l_num_substrings := l_num_substrings + 10;
    commit;
  end loop;
end;
/

Notice that I keep the same number of input rows here, whereas in my previous tests I kept the same number of output rows. My “tokenize” solution scales OK, but the PL/SQL function is much faster and scales even better.

In this case a combined SQL + PL/SQL solution beats the best pure SQL solution.

Splitting Strings: Proof!

In my previous post I used XMLTABLE and ora:tokenize to split a comma delimited string. Now I’ll apply that technique to multiple rows, and show that it’s faster than other methods.

Test data

In my tests, I configure the length of the substring, the number of substrings per row and the total number of rows I should get as output. Each input string is unique because it starts with the ID of the row: this way I avoid any caching that might reduce the number of function calls.

drop table t purge;

create table t cache as
with parms as (
  select 9 str_len, 5 num_subs, 100000 num_rows from dual
)
, str_row as (
  select listagg(n,',') within group(order by n) str
  from (
    select lpad(level+1,str_len,'0') n from parms
    connect by level <= num_subs-1
  )
)
select level id,
  lpad(level,str_len,'0') ||','||str str
from parms, str_row
connect by level <= num_rows/num_subs;

select * from t where id <= 11;
ID STR
1 000000001,000000002,000000003,000000004,000000005
2 000000002,000000002,000000003,000000004,000000005
3 000000003,000000002,000000003,000000004,000000005
4 000000004,000000002,000000003,000000004,000000005
5 000000005,000000002,000000003,000000004,000000005
6 000000006,000000002,000000003,000000004,000000005
7 000000007,000000002,000000003,000000004,000000005
8 000000008,000000002,000000003,000000004,000000005
9 000000009,000000002,000000003,000000004,000000005
10 000000010,000000002,000000003,000000004,000000005
11 000000011,000000002,000000003,000000004,000000005

 

Notice the CACHE keyword when I create the table. Before my tests, I access the entire table to make sure it is all in the buffer cache.
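
The “access the entire table” step is nothing fancy; a full scan such as the following (my sketch) is enough to warm the buffer cache before any timings.

-- Full scan of T so its blocks are in the buffer cache and later runs measure CPU, not physical I/O
select count(*) from t;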

The “substr+instr” technique

This is the technique from my “New, Improved IN Lists” post. All I need to do is apply it to multiple rows.

One way to do that is to use the 12c LATERAL() clause. If you are not yet in 12c, try TABLE(CAST(MULTISET(...) AS SYS.ODCIVARCHAR2LIST)) – see the sketch right after the LATERAL example below.

select a.id, b.subs from t a,
lateral(
  select substr(
    str,
    pos + 1,
    lead(pos,1,4000) over(order by pos) - pos - 1
  ) subs
  from (
    select instr(str, ',', 1, level) pos
    from dual
    connect by
      level <= length(str) - nvl(length(replace(str, ',', '')), 0) + 1
  )
) b;
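
For completeness, here is my untested sketch of the pre-12c variant mentioned above: the correlated MULTISET subquery plays the role of LATERAL. I pad the string with delimiters on both ends so that one INSTR expression handles every position; the timings in this post do not cover this form.

-- Pre-12c sketch: TABLE(CAST(MULTISET(...))) with left correlation to "a"
select a.id, b.column_value subs
from t a,
table(cast(multiset(
  select substr(',' || a.str || ',',
                instr(',' || a.str || ',', ',', 1, level) + 1,
                instr(',' || a.str || ',', ',', 1, level + 1)
                  - instr(',' || a.str || ',', ',', 1, level) - 1)
  from dual
  connect by level <= length(a.str) - nvl(length(replace(a.str, ',', '')), 0) + 1
) as sys.odcivarchar2list)) b;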

The “tokenize” technique

This one is easy to adapt to multiple rows:

select id, subs from t, xmltable(
  'if (contains($X,",")) then ora:tokenize($X,"\,") else $X'
  passing str as X
  columns subs varchar2(4000) path '.'
);

[Update 2016-08-02: in a comment, Todd Hershiser points out that the second parameter of ora:tokenize is a regular expression. In order to use a regex “metacharacter” like “|” as a delimiter, I need to escape it with a backslash. I decided to put the backslash in everywhere since it doesn’t do any harm.

On the other hand, if the delimiter is ‘&’ then this solution cannot be used.]

The “regexp_substr” technique

This technique is fairly popular, no doubt because it is concise. For multiple rows, I use the “CONNECT BY ... PRIOR SYS_GUID()” technique that I explained in Chunking tables 7: prior sys_guid().

select id, regexp_substr (str, '[^,]+', 1, level) subs
from t
connect by level <= length (regexp_replace (str, '[^,]+')) + 1
and id = prior id
and prior sys_guid() is not null;

Test Results

In my test results, the “substr+instr” solution is slightly better than “tokenize” when there are fewer than 8 substrings per string. As the number of substrings increases, the “tokenize” solution scales much better. As for the “regexp_substr” solution, it is on average 20 times slower than “tokenize”.