Ranges with NULLs 1: starting over

I have written a lot about ranges, mostly based on dates, and I have tried my best to avoid NULLs in “from” and “to” columns. I give up: NULLs are a fact of life and must be dealt with. This means revisiting all my previous work!

What do I mean by “range”?

Let’s start with the SQL:2011 definition of PERIOD. A PERIOD identifies two datetime columns. The end datetime is greater than the start datetime. In the Oracle implementation, both start and end can be null.

A datetime value falls within the period if:

  • it is greater than or equal to the start datetime
    OR the start datetime is NULL
  • and it is less than the end datetime
    OR the end datetime is NULL.

A period that starts with NULL and ends with NULL will include any datetime value, including NULL!

My notion of “range” is basically an extension of this definition to more data types:

  • A range identifies a pair of values of the same data type.
  • The data type can be any scalar capable of equal, greater-than and less-than comparisons,
    though datetime types and numbers are by far the most common.
  • I tend to use “from” and “to” because “start” and “end” make me think of datetime ranges.
  • “From” is less than “to”
  • “From” and / or “to” can be NULL
  • A value falls within the range if:
    • it is greater than or equal to “from”
      OR “from” is NULL
    • and it is less than “to”
      OR “to” is NULL
  • In table definitions, I strongly recommend constraints
    • “from” < “to”. This constraint should always be there.
    • “from” NOT NULL (unless a use case necessitates NULL).
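
To make the containment rule concrete, here is a quick Python sketch (my own illustration, with None standing in for NULL):

```python
def in_range(value, frm, to):
    """Closed-open containment with NULL (None) meaning "no bound"."""
    if value is None:
        # a NULL value only falls within the fully unbounded range
        return frm is None and to is None
    return (frm is None or value >= frm) and (to is None or value < to)

assert in_range(5, 1, 10)
assert in_range(1, 1, 10)       # "from" belongs to the range
assert not in_range(10, 1, 10)  # "to" does not
assert in_range(None, None, None)
```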

Closed-Open ranges

When defining ranges, we always want to allow the possibility for two ranges to “meet”, meaning there is no gap between them yet they do not overlap.

The “closed-open” model allows this, because the “from” value belongs to the range but the “to” value does not.

If we have a range from 1 to 2 and another range from 2 to 3:

  • The value 2 is not in the first range, but it is in the second.
  • There is no value from 1 to 3 that is in both ranges, so there is no overlap.
  • There is no value from 1 to 3 that is in neither range, so there is no gap.
  • In other words, every value from 1 to 3 is in exactly one of the two ranges.

This model works with all types of range, and it makes some queries easier to write. It’s all good.
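
For the record, here is the closed-open overlap test as a Python sketch of mine (None again meaning "no bound"): two ranges overlap if and only if each one starts before the other ends.

```python
def overlaps(r1, r2):
    """Closed-open ranges [f, t): overlap iff each starts before the other ends."""
    (f1, t1), (f2, t2) = r1, r2
    return ((f1 is None or t2 is None or f1 < t2)
            and (f2 is None or t1 is None or f2 < t1))

def meets(r1, r2):
    # no gap and no overlap: one range ends exactly where the other starts
    return (r1[1] is not None and r1[1] == r2[0]) or \
           (r2[1] is not None and r2[1] == r1[0])

assert not overlaps((1, 2), (2, 3))   # the value 2 is only in the second range
assert meets((1, 2), (2, 3))
```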

Objections to NULL in ranges

I once wrote “SQL and date ranges: don’t make NULL mean something”, where I gave five arguments against NULL. Here’s what I think as of today.

  1. “NULL should not mean anything”.
    I still agree with that, but what can you do?
  2. “We can use real values instead of NULL”.
    That may be the case for “from”, but not for “to”: there is no valid “to” value that is strictly greater than every valid value of the data type!
  3. “Oracle doesn’t index NULLs”.
    Oracle does not index a row if all of the columns to be indexed are NULL. When this is a problem, it can be worked around.
  4. “Queries on NULLable ranges are hard”.
    We’ll see if I am up to the task…
  5. “Unique constraints don’t work with NULL”.
    Again, when this is a problem there are workarounds.

Next up: test data! If the queries are hard, I must try to test them thoroughly.



Optimistic Locking 8: double-checking with flashback

Some optimistic locking methods, including the SCN method, can detect intervening updates even when column values have not really changed. With the SCN method, we can use flashback queries to confirm that real changes occurred.

Flashback queries are evil

As soon as we mention flashback queries, some wise DBAs will immediately object that such queries harm the “shared pool”. This is a serious objection; let me explain why.

Before executing a SQL statement, Oracle must parse it and establish an execution plan: this is called the “hard parse”. Hard parsing can be an expensive operation, and it can be a bottleneck since it uses resources that must be serialized. To avoid constant parsing, Oracle stores executable forms of SQL cursors and PL/SQL programs in the “library cache”, which is part of the shared pool. This enables “parse once, execute many”, a prerequisite for good performance in OLTP databases.

Imagine a SQL statement that must be hard parsed before every execution. That one statement will use up resources and slow down parsing of other new statements. Not good. Now imagine that the “cursor” resulting from that parse goes into the library cache: the cache will get bigger and bigger and slower to navigate, until finally other cursors start getting aged out. We now have a dysfunctional database that spends its time hard parsing instead of executing statements.
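
To picture the difference, here is a toy model in Python (purely illustrative, nothing like Oracle’s real internals): a cache keyed by SQL text, where an unshareable cursor forces a hard parse on every execution.

```python
class LibraryCache:
    """Toy model of 'parse once, execute many' keyed by SQL text."""
    def __init__(self):
        self.cursors = {}
        self.hard_parses = 0

    def execute(self, sql_text, shareable=True):
        cursor = self.cursors.get(sql_text)
        if cursor is None or not shareable:
            self.hard_parses += 1                    # expensive, serialized work
            cursor = 'parsed:' + sql_text
            self.cursors.setdefault(sql_text, cursor)
        return cursor

cache = LibraryCache()
for _ in range(10):
    cache.execute('select count(*) from T')          # 1 hard parse, 9 shared
for _ in range(10):
    cache.execute('select count(*) from T as of scn :b1',
                  shareable=False)                   # like AS OF SCN: never shared
assert cache.hard_parses == 11
```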

Unfortunately, one of those statements that gets hard parsed every time is the AS OF flashback query :-(

To demonstrate this, I’ll run a simple AS OF query in a PL/SQL block, using the same SCN every time. Note that I flush the shared pool beforehand, so there will always be a first hard parse.

declare
  l_scn number;
  l_cnt number;
begin
  execute immediate 'alter system flush shared_pool';
  l_scn := dbms_flashback.get_system_change_number;
  for i in 1..10 loop
    select count(*) into l_cnt from T as of scn l_scn;
  end loop;
end;
/

Now I’ll query V$SQL_SHARED_CURSOR to see what cursors could not be “shared” and had to be hard parsed over and over again.

select child_number, parse_calls, executions, reason "Reason: AS OF SCN"
from v$sql
left join V$SQL_SHARED_CURSOR using(sql_id, child_number)
where sql_text like 'SELECT COUNT(*) FROM T%';

CHILD_NUMBER PARSE_CALLS EXECUTIONS Reason: AS OF SCN                                                               
------------ ----------- ---------- --------------------------------------------------------------------------------
           0           1          1 <ChildNode><ChildNumber>0</ChildNumber><ID>21</ID><reason>Flashback cursor(1)</r
           1           1          1 <ChildNode><ChildNumber>1</ChildNumber><ID>21</ID><reason>Flashback cursor(1)</r
           2           1          1 <ChildNode><ChildNumber>2</ChildNumber><ID>21</ID><reason>Flashback cursor(1)</r
           3           1          1 <ChildNode><ChildNumber>3</ChildNumber><ID>21</ID><reason>Flashback cursor(1)</r
           4           1          1 <ChildNode><ChildNumber>4</ChildNumber><ID>21</ID><reason>Flashback cursor(1)</r
           5           1          1 <ChildNode><ChildNumber>5</ChildNumber><ID>21</ID><reason>Flashback cursor(1)</r
           6           1          1 <ChildNode><ChildNumber>6</ChildNumber><ID>21</ID><reason>Flashback cursor(1)</r
           7           1          1 <ChildNode><ChildNumber>7</ChildNumber><ID>21</ID><reason>Flashback cursor(1)</r
           8           1          1 <ChildNode><ChildNumber>8</ChildNumber><ID>21</ID><reason>Flashback cursor(1)</r
           9           1          1 

Formatted content of first Reason:
  <reason>Flashback cursor(1)</reason>

Compare this to a normal query:

declare
  l_cnt number;
begin
  execute immediate 'alter system flush shared_pool';
  for i in 1..10 loop
    select count(*) into l_cnt from T;
  end loop;
end;
/

select child_number, parse_calls, executions, reason "Reason: no flashback"
from v$sql
left join V$SQL_SHARED_CURSOR using(sql_id, child_number)
where sql_text like 'SELECT COUNT(*) FROM T%';

CHILD_NUMBER PARSE_CALLS EXECUTIONS Reason: no flashback
------------ ----------- ---------- --------------------
           0           1         10

Flashback queries are evil – not all the time

Instead of adding AS OF SCN to every table in my query, I can use DBMS_FLASHBACK.ENABLE_AT_SYSTEM_CHANGE_NUMBER to set the “consistent read” SCN for every table in every query. Let’s see what happens with this technique:

declare
  l_scn number;
  l_cnt number;
begin
  execute immediate 'alter system flush shared_pool';
  l_scn := dbms_flashback.get_system_change_number - 1001;
  for i in 1..10 loop
    dbms_flashback.enable_at_system_change_number(l_scn + i*100); -- SCN increases each time
    select count(*) into l_cnt from T;
    dbms_flashback.disable;
  end loop;
end;
/

select child_number, parse_calls, executions, reason "DBMS_FLASHBACK, increasing SCN"
from v$sql
left join V$SQL_SHARED_CURSOR using(sql_id, child_number)
where sql_text like 'SELECT COUNT(*) FROM T%';

CHILD_NUMBER PARSE_CALLS EXECUTIONS DBMS_FLASHBACK, increasing SCN
------------ ----------- ---------- ------------------------------
           0          10         10

Notice that there are 10 “parse calls” here. Oracle did a “soft parse”, that is, it looked in the library cache for a cursor that could be reused, and it found one. This is extra work compared to a normal query, but not nearly as much as a hard parse. More importantly, the library cache is not flooded with useless child cursors.


It turns out that we can safely use this technique under certain conditions:

  • The FLASHBACK and SELECT privileges must be granted on whatever tables we want to query;
  • The EXECUTE privilege must be granted on DBMS_FLASHBACK;
  • The SCN parameter of ENABLE_AT_SYSTEM_CHANGE_NUMBER must be increasing! If it decreases, we get hard parses and extra child cursors.
    This should never be a problem in the “optimistic locking” use case.

Optimistic Locking 7: Restartability

When we UPDATE, Oracle may find a discrepancy between the read-consistent and “current” versions of the data. If so, Oracle “restarts” the process. Our optimistic locking solution must make that happen when appropriate.

(It’s been a long time since my last post on this subject: see Optimistic Locking 4: the Good for context.)

What happens when we UPDATE?

(To simplify, I will only talk about the default READ COMMITTED isolation level.)

If we do an UPDATE without a prior SELECT FOR UPDATE,

  • Oracle will start by doing “consistent gets” of the target rows. This produces a result set that is read-consistent.
  • Next, Oracle will do “current gets” to lock and update the target rows.
    • If there is a predicate in the WHERE clause, such as WHERE HIREDATE > TRUNC(SYSDATE, 'YY'), and the column has been changed in the meantime, then Oracle will check whether the WHERE clause is still satisfied,
    • and if not then Oracle will “restart” the update processing.

Oracle will not stop until it has gotten and locked current versions of all the rows that meet the WHERE clause criteria in a read-consistent manner.
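
Here is a deliberately simplified Python model of that restart loop (my own sketch with made-up helper names, not Oracle’s actual algorithm): read a consistent snapshot, lock the current rows, and if a locked row no longer satisfies the predicate, restart from a fresh snapshot.

```python
def update_with_restart(snapshot_read, current_lock, predicate, apply):
    """Toy model of Oracle's UPDATE restart under READ COMMITTED."""
    while True:
        snap = [r for r in snapshot_read() if predicate(r)]     # consistent gets
        current = [current_lock(r) for r in snap]               # current gets + locks
        if all(predicate(r) for r in current):
            for r in current:
                apply(r)
            return current
        # a row changed between the consistent get and the current get: restart

# Simulated table, and a "concurrent commit" that fires during the first lock
table = [{'id': 1, 'sal': 800}, {'id': 2, 'sal': 3000}]
fired = {'done': False}

def snapshot_read():
    return [dict(r) for r in table]          # read-consistent copies

def current_lock(snap_row):
    live = next(r for r in table if r['id'] == snap_row['id'])
    if not fired['done']:                    # another session commits sal = 5000
        fired['done'] = True
        live['sal'] = 5000
    return live

updated = update_with_restart(snapshot_read, current_lock,
                              lambda r: r['sal'] < 1000,
                              lambda r: r.update(sal=r['sal'] + 10))
```

After the restart, the changed row no longer meets the WHERE condition, so nothing is updated: exactly the behavior we want optimistic locking to trigger.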

Re-evaluate, then (maybe) Restart

To make sure restarts happen when they should, we need to know:

  1. What columns set off the re-evaluation? To set off the re-evaluation, a changed column must be referenced in the WHERE clause or in a BEFORE UPDATE...FOR EACH ROW trigger. The ORA_ROWSCN pseudocolumn does not set off a re-evaluation.
  2. Once the re-evaluation is under way, what additional data is evaluated to decide whether to restart?
    If the re-evaluation takes place, the entire WHERE clause is evaluated.
    If ORA_ROWSCN is included in the WHERE clause, its “current” value is used in any comparison.

If we want to use ORA_ROWSCN for optimistic locking, we need to reference in the WHERE clause any columns we ourselves change. That will set off the re-evaluation, which will then use ORA_ROWSCN to determine whether some other process has changed the row since we last queried it.

See Avoiding Lost Updates with ORA_ROWSCN for a demonstration. Here is an example of an UPDATE that will restart when necessary:

update emp set sal = 800, deptno = 30
where empno = 7369
and ora_rowscn <= 13865770
and coalesce('*',sal || deptno) is not null;

I decided to use COALESCE to refer to the column values because it uses short-circuit evaluation. Once it determines that ‘*’ is not null, it doesn’t bother evaluating the concatenation of the columns. Also, concatenation implicitly converts every datatype to VARCHAR2 so I can list strings, dates, timestamps, numbers and intervals without worry. To prevent the Optimizer from removing that condition someday, I could replace ‘*’ by a bind variable with a non-null value.


ORA_ROWSCN can indeed be used as a “version number” that can be checked to avoid lost updates. The UPDATE statement just has to list the appropriate columns in the WHERE clause, but it doesn’t have to check them against values from a prior SELECT statement. The only real value that needs to be checked is ORA_ROWSCN.
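
As a toy illustration (in-memory Python, with a field standing in for ORA_ROWSCN), the version check boils down to this:

```python
def optimistic_update(row, expected_scn, new_values):
    """Sketch: the update applies only if the row's version
    (ORA_ROWSCN in the posts) has not moved since we read it."""
    if row['scn'] > expected_scn:
        return False            # intervening update detected: raise or re-query
    row.update(new_values)
    row['scn'] += 1             # a real SCN would jump; +1 is enough here
    return True

emp = {'empno': 7369, 'sal': 800, 'scn': 13865770}
assert optimistic_update(emp, 13865770, {'sal': 900})       # first update wins
assert not optimistic_update(emp, 13865770, {'sal': 1000})  # stale read rejected
```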

In the next post I’ll address the problem of “false alarms”: ORA_ROWSCN will tell us that a row has previously been updated, but it will not say whether the values have really changed in the columns we care about.

CSV from CLOB with field enclosures

After my post about extracting CSV without enclosures from a CLOB, here is my solution for CSV with enclosures. It wasn’t easy…

How Oracle parses CSV

We can parse CSV in files using SQL*Loader, but I prefer External Tables with the ORACLE_LOADER access driver. Suppose an External Table has these parameters:

records delimited by NEWLINE
fields terminated by ','
optionally enclosed by '"'

From the documentation and testing, what I see is this:

  1. With these parameters, we cannot embed record delimiters within quotes.
  2. When the field has no enclosures, all characters between commas are output, whitespace or not.
  3. When there are enclosures:
    a. Whitespace is allowed and stripped before and after enclosed fields, even with NOTRIM
    b. field terminators can be embedded between the enclosures
    c. record delimiters cannot be embedded: they end the record
    d. to be enclosed, field enclosures must be doubled: they are undoubled in the output
    e. Whitespace characters are: all characters considered whitespace by REGEXP ‘\s’
      9-13, 32 (space) – and 17 others if AL32UTF8!
    f. No second enclosure > error
    g. No terminator between enclosed fields > error
    h. If there is a field terminator at the end of a record, it ends the field but does not start a new field.

I admit that 3.A. (whitespace OK before and after enclosures) was a surprise to me.

My objective

I want to emulate the parameters listed above but I also want decent performance. For performance reasons, I decided to simplify the rules I follow:

  • 3.A. : no whitespace allowed before or after enclosed fields
  • 3.E. : I don’t care whether a character is whitespace or not.
  • 3.G. : the error is now “second enclosure not followed by a field terminator”
  • Instead of LOG and BAD files, I log errors and bad input with DBMS_OUTPUT.PUT_LINE
  • I ignore extra newlines in the input, but I don’t attempt to suppress output rows whose fields are all null.
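
Before showing the PL/SQL, here is the gist of those simplified rules as a Python sketch (my own function and error messages, just to pin down the behavior for one record):

```python
def parse_record(line, sep=',', quote='"'):
    """One record under the simplified rules: fields optionally enclosed,
    doubled enclosures undoubled, no whitespace around enclosed fields."""
    fields, i, n = [], 0, len(line)
    while True:
        if i < n and line[i] == quote:           # enclosed field
            i += 1
            buf = []
            while True:
                if i >= n:
                    raise ValueError('Ending field enclosure not found')
                if line[i] == quote:
                    if i + 1 < n and line[i + 1] == quote:
                        buf.append(quote)        # doubled enclosure -> one literal quote
                        i += 2
                    else:
                        i += 1                   # closing enclosure
                        break
                else:
                    buf.append(line[i])
                    i += 1
            fields.append(''.join(buf))
            if i == n:
                break
            if line[i] != sep:
                raise ValueError('Ending field enclosure not followed by field terminator')
            i += 1
            if i == n:                           # trailing terminator ends the field only
                break
        else:                                    # unenclosed field
            j = line.find(sep, i)
            if j == -1:
                fields.append(line[i:])
                break
            fields.append(line[i:j])
            i = j + 1
            if i == n:                           # trailing terminator
                break
    return fields

assert parse_record('a,"b,""c""",d') == ['a', 'b,"c"', 'd']
assert parse_record('a,b,') == ['a', 'b']
```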

The code

This code should probably be written in C, which allows direct access to each character in a string. However, anyone who can install a C program on the Database server can probably copy the CLOB to a file and use External Table functionality directly! To make my PL/SQL as fast as possible, I use a technique from my COBOL and C days: the GOTO. With this type of code, calling subprocedures would measurably increase CPU usage.

create or replace function parse_csv_json_array(
  p_str in varchar2,
  p_log integer default null
) return varchar2 authid current_user is
/*
Objective: minimal service for parsing CSV with enclosures, similar to
the External Table parameters listed above
> EXCEPT: I allow no whitespace between field enclosures and field terminators.

- record delimiter = NEWLINE, field terminator = "," and field enclosure = '"'
  all are hard coded for the moment.
- record delimiter cannot be embedded in enclosed field.
- without enclosures, everything between field terminators and / or record delimiters is output
- with enclosures:
  - no whitespace allowed before or after enclosed fields
  - field terminators can be embedded between the enclosures
  - to be enclosed, field enclosures must be doubled: they are undoubled in the output
  - no second enclosure > error 'Ending field enclosure not found'
  - no terminator after enclosed field > error 'Ending field enclosure not followed by field terminator'
  - if there is a field terminator at the end of a record, it ends the field but does not start a new field.
- Instead of LOG and BAD files, I log errors and bad input with DBMS_OUTPUT.PUT_LINE
- I ignore extra newlines in the input, but I do not try to suppress output with all null fields.
- The input contains multiple records, so create an array of arrays: one inner array per record
*/
  l_out varchar2(4000);
  i_str integer;
  max_i_str integer;
  max_i_rec integer;
  i_end integer;
  i_num_backslash integer;
begin
  if p_str = '[]' then
    return p_str;
  end if;
  i_str := 3;
  max_i_rec := 2 - 2; -- end of "preceding record", counting 2 for length of record delimiter
  max_i_str := length(p_str) - 2;
  l_out := '[["';

<<start_record>>
  i_end := max_i_rec + 3; -- length of record delimiter + 1
  i_end := instr(p_str, '\n', i_end);
  if i_end = 0 or i_end is null then
    -- end of record and end of input
    max_i_rec := max_i_str;
  else
    max_i_rec := i_end - 1;
    -- found \n at beginning, skip
    if max_i_rec < i_str then
      i_str := i_str + 2; -- length of record delimiter
      goto start_record;
    end if;
    -- check for '\\' before n
    i_num_backslash := 0;
    while substr(p_str, i_end-i_num_backslash-1,1) = '\' loop
      i_num_backslash := i_num_backslash + 1;
    end loop;
    if mod(i_num_backslash,2) = 1 then
      -- false alert, there was '\n' in the input and json_array made it '\\n'
      goto start_record;
    end if;
  end if;

<<start_field>>
  if substr(p_str, i_str, 2) = '\"' then
    -- enclosures, so must do one character at a time
    i_str := i_str + 2;
    goto during_enclosed;
  else
    -- no enclosures, search for end of field in record
    i_end := instr(substr(p_str, i_str, max_i_rec - i_str + 1)||',', ',');
    l_out := l_out || substr(p_str, i_str, i_end - 1);
    i_str := i_str + i_end;
    if i_str <= max_i_rec + 1 then
      -- end of field, not end of record
      l_out := l_out || '","';
      goto start_field;
    elsif max_i_rec < max_i_str then
      -- last field of record, not last record
      i_str := max_i_rec + 3;
      l_out := l_out || '"],["';
      goto start_record;
    else
      -- last field of record, last record of input
      l_out := l_out || '"]]';
      goto end_input;
    end if;
  end if;

<<during_enclosed>>
  i_end := instr(p_str, '\"', i_str);
  if i_end = 0 or i_end > max_i_rec then
    dbms_output.put_line('Ending field enclosure not found, input follows:');
    dbms_output.put_line('<'||json_value(p_str, '$[0]')||'>');
    l_out := null;
    goto end_input;
  end if;
  l_out := l_out || substr(p_str, i_str, i_end - i_str);
  i_str := i_end + 2;
  if substr(p_str, i_str, 2) = '\"' then
    l_out := l_out || '\"';
    i_str := i_str + 2;
  elsif substr(p_str, i_str, 1) = ',' then
    l_out := l_out || '","';
    i_str := i_str + 1;
    goto start_field;
  elsif i_str > max_i_str then
    l_out := l_out || '"]]';
    goto end_input;
  elsif i_str > max_i_rec then
    l_out := l_out || '"],["';
    i_str := max_i_rec + 3;
    goto start_record;
  else
    dbms_output.put_line('Ending field enclosure not followed by field terminator, input follows:');
    dbms_output.put_line('<'||json_value(p_str, '$[0]')||'>');
    l_out := null;
    goto end_input;
  end if;
  goto during_enclosed;

<<end_input>>
  return l_out;
end parse_csv_json_array;

Test results

I took five columns from DBA_OBJECTS and added one VARCHAR2(100) and one NUMBER(4). I tested with four sizes of CLOBs: 100,000 records, 200,000 records, 400,000 records and 800,000 records. I compared input with no enclosures and with all fields enclosed.

Compared to the “simple” CSV solution, this solution is about 50% to 70% slower, depending on how many fields are enclosed. However, the number of records extracted per second remains stable as volume increases.

Please let me know if this solution is of practical interest to anyone…

Extract from CLOB with JSON objects

On AskTOM, Kim Berg Hansen recently used JSON_OBJECT to parse CLOB data in “name=value” format. I added a variant based on my work with CSV data.

Kim decided to use objects instead of arrays for an excellent and important reason:

  • a JSON object is an unordered collection of name/value pairs.
  • a JSON array is an ordered list of values.

CSV data is an ordered list of values, going from left to right, so a JSON array is the obvious choice. The AskTOM question concerned name/value pairs that were not necessarily in the same order and not necessarily all there! A JSON object was the natural choice.
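
As a sketch (the "name=value" pair separator here is hypothetical; the AskTOM data is not reproduced), this is why an object fits: the same pairs in any order produce the same JSON object.

```python
import json

def record_to_json_object(record, pair_sep=';'):
    """Turn 'name=value' pairs (any order, any subset) into a JSON object."""
    pairs = (item.split('=', 1) for item in record.split(pair_sep) if item)
    return json.dumps(dict(pairs), sort_keys=True)

# order does not matter for an object, which is the whole point
assert record_to_json_object('job=CLERK;ename=SMITH') == \
       record_to_json_object('ename=SMITH;job=CLERK')
```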

I’m really liking JSON more and more: not only is it simple, but it seems pretty easy to figure out how best to use it for different problems.

Instead of copying everything here, I’m going to be lazy for once and invite you to read my contribution here:


You will see some of the same ideas:

  • Use PIPE_CLOB to cut the CLOB into VARCHAR sized bites;
  • Use JSON_ARRAY on the whole bite to escape characters if needed;
  • Use REPLACE to form an overall array, but this time of objects;
  • Then use JSON_TABLE to generate one row per object and one column per value.

Hope this helps…

Extract CSV from CLOB with JSON arrays

Marc Bleron blogged about CSV CLOBs and JSON_TABLE two years ago. Here’s my contribution to improve on a great idea.

“Simple” CSV

The Oracle Database does a fine job of parsing CSV data in flat files, using the External Tables facility. Unfortunately, this service is not available for CSV data in VARCHARs or CLOBs. Marc showed that JSON_TABLE (and XML_TABLE) can parse “simple” CSV if it is reformatted. What is “simple”?

CSV data consists of records and fields within records.

  • Records are delimited by NEWLINE (or some other string).
  • Fields are terminated by commas (or some other string),
  • If necessary, some or all fields can be enclosed by double quotes " (or some other string).

When Marc says “simple”, he means that fields are never enclosed. This is important, because enclosed fields may contain the double quote (provided it is present twice in a row) and / or the comma. With “simple” CSV, we know that all commas are true field terminators and we don’t have to replace "" with " .

“Simple” also means that there is no trimming of whitespace: you get everything between the commas.

Finally, Marc assumes there is no terminator after the last field of the record, even though Oracle allows it.

So, “simple” CSV has delimited records with terminated fields that are never enclosed. There is no trimming of whitespace and the last field in the record is not terminated.
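
In other words, parsing a "simple" record needs nothing more than a split, sketched here in Python:

```python
def parse_simple_record(line, sep=','):
    # "simple" CSV: every separator is a true terminator, nothing is trimmed
    return line.split(sep)

assert parse_simple_record('7369, SMITH,CLERK') == ['7369', ' SMITH', 'CLERK']
```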

My contribution

  • First of all, I break the CLOB into VARCHAR2 bites using the pipelined table function PIPE_CLOB (as explained in my previous post).
  • Then I remove any field terminator that immediately precedes a record delimiter.
  • Then I use JSON_ARRAY over the entire VARCHAR2 in case some characters need to be escaped.
  • Then I do several REPLACES such that:
    • each record becomes a JSON array of string values, and
    • those arrays are included in one overall array.
  • Finally, I use JSON_TABLE to break the overall array into rows and the inner arrays into columns.

Note that everything before the COLUMNS clause in JSON_TABLE is generic, because the inner arrays can contain any number of elements.
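
The REPLACE chain is easier to see in a Python sketch (json.dumps standing in for JSON_ARRAY; the function name is my own):

```python
import json

def csv_to_json_arrays(chunk):
    """One VARCHAR2 bite of 'simple' CSV -> JSON array of arrays,
    one inner array per record, mimicking the SQL REPLACE chain."""
    chunk = chunk.replace(',\n', '\n')   # drop terminator before record delimiter
    jstr = json.dumps([chunk])           # escape the whole bite as one JSON string
    jstr = jstr.replace(',', '","')      # field terminator -> string separator
    jstr = jstr.replace('\\n', '"],["')  # escaped record delimiter -> new inner array
    return '[' + jstr + ']'

assert json.loads(csv_to_json_arrays('a,b,\nc,d')) == [['a', 'b'], ['c', 'd']]
```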

To demonstrate, here is a CLOB containing data from the EMP table, with a trailing comma added:


And the code:

with last_term_removed as (
  select replace(column_value, ','||chr(10), chr(10)) str
  from table(pipe_clob((select c from t), 3500))
)
, json_data as (
  select '[' ||
    replace (
      replace(json_array(str), ',', '","'),
      '\n',
      '"],["'
    )
    || ']' jstr
  from last_term_removed
)
select sql_data.*
from json_data j, json_table(
   j.jstr, '$[*]'
   columns (
     empno    number        path '$[0]'
   , ename    varchar2(128) path '$[1]'
   , job      varchar2(128) path '$[2]'
   , mgr      number        path '$[3]'
   , hiredate date          path '$[4]'
   , sal      number        path '$[5]'
   , comm     number        path '$[6]'
   , deptno   number        path '$[7]'
   )
) sql_data;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
7369 SMITH CLERK 7902 1980-12-17 800 20
7499 ALLEN SALESMAN 7698 1981-02-20 1600 300 30
7521 WARD SALESMAN 7698 1981-02-22 1250 500 30
7566 JONES MANAGER 7839 1981-04-02 2975 20
7654 MARTIN SALESMAN 7698 1981-09-28 1250 1400 30
7698 BLAKE MANAGER 7839 1981-05-01 2850 30
7782 CLARK MANAGER 7839 1981-06-09 2450 10
7839 KING PRESIDENT 1981-11-17 5000 10
7844 TURNER SALESMAN 7698 1981-09-08 1500 0 30
7876 ADAMS CLERK 7788 1987-05-23 1100 20
7900 JAMES CLERK 7698 1981-12-03 950 30
7902 FORD ANALYST 7566 1981-12-03 3000 20
7934 MILLER CLERK 7782 1982-01-23 1300 10


Scalability: yes!

When Marc tested reading CLOBs directly, performance went bad as the CLOB increased in size:

Rows Seconds
91336 2.2
182672 4.3
365344 9.4
730688 22
1461376 840


In my tests with very similar data, the number of rows per second remains about the same:

Rows     Seconds  Rows/second  Average  % Difference
91336    1.374    66475        68517    -3
182672   2.6      70258        68517    2.5
365344   5.35     68289        68517    -.3
730688   10       73069        68517    6.6
1461376  22       66426        68517    -3.1


Read CLOBs fast with less memory

Reading a big CLOB is like trying to eat a burger all at once: it will take you forever if you don’t choke. Why not cut that CLOB into bite-size chunks? It’s faster, uses less memory – and it’s good table manners…

Marc Bleron has blogged about parsing CSV data in CLOBs. As the CLOB got bigger, the parsing time went way up. I decided to write a pipelined table function that would return the CLOB in VARCHAR2-sized bites, cutting at record boundaries.

  • By default, the maximum bite size is 4000 bytes. You can make it less if you need some room to use REPLACE.
  • Also by default, the record delimiter is the NEWLINE of your operating system, but you can change it.

Again, the intent is to return as many entire records as will fit in the “bite”. If the input has a record that is longer than the maximum bite size, then this function will raise an exception.

One of the pains of CLOBs is that LENGTH() and SUBSTR() deal with characters only: the LENGTHB() and SUBSTRB() functions deal with bytes, but they are limited to VARCHAR2. Fortunately, VARCHAR2 in PL/SQL can be much longer, so I read 4000 characters into a buffer and then I cut off any records that overflow the 4000-byte boundary.
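
The cutting logic, minus the byte juggling that PL/SQL imposes, can be sketched in Python like this (UTF-8 assumed; a sketch of the idea, not a port of the function below):

```python
def pipe_chunks(text, max_bytes=4000, delim='\n'):
    """Yield bites of at most max_bytes bytes, each ending at a record
    boundary; the last record delimiter of each bite is not returned."""
    if text.endswith(delim):
        text = text[:-len(delim)]
    delim_b = len(delim.encode('utf-8'))
    out, size = [], 0
    for rec in text.split(delim):
        rec_b = len(rec.encode('utf-8'))
        if rec_b > max_bytes:
            raise ValueError('record longer than max_bytes: no delimiter fits')
        needed = rec_b if not out else rec_b + delim_b
        if size + needed > max_bytes:
            yield delim.join(out)       # flush the full bite
            out, size = [rec], rec_b
        else:
            out.append(rec)
            size += needed
    if out:
        yield delim.join(out)

assert list(pipe_chunks('aaa\nbbb\nccc\n', max_bytes=7)) == ['aaa\nbbb', 'ccc']
```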

UPDATE 2018-06-15: I renamed “rec_term” to “rec_delim” because external table definitions say “records delimited by”. I now raise an exception if a bite does not contain the record delimiter.

UPDATE 2018-06-16 and -17: bug fixes for edge cases.

UPDATE 2018-06-28: better error handling. The maximum record length (in bytes) was actually 4000 including the record delimiter. It is now 4000 excluding the record delimiter.

create or replace function pipe_clob (
  p_clob in clob,
  p_max_lengthb in integer default 4000,
  p_rec_delim in varchar2 default '
'
) return sys.odcivarchar2list pipelined authid current_user as
/*
Break CLOB into VARCHAR2 sized bites.
Reduce p_max_lengthb if you need to expand the VARCHAR2
in later processing.
Last record delimiter in each bite is not returned,
but if it is a newline and the output is spooled
the newline will come back in the spooled output.
Note: this cannot work if the CLOB contains more than
<p_max_lengthb> consecutive bytes without a record delimiter.
*/
  l_amount integer;
  l_offset integer;
  l_buffer varchar2(32767 BYTE);
  l_out varchar2(32767 BYTE);
  l_buff_lengthb integer;
  l_occurence integer;
  l_rec_delim_length integer := length(p_rec_delim);
begin
  if p_max_lengthb > 4000 then
    raise_application_error(-20001, 'Maximum record length (p_max_lengthb) cannot be greater than 4000.');
  elsif p_max_lengthb < 10 then
    raise_application_error(-20002, 'Maximum record length (p_max_lengthb) cannot be less than 10.');
  end if;
  if p_rec_delim is null then
    raise_application_error(-20003, 'Record delimiter (p_rec_delim) cannot be null.');
  end if;
  /* This version is limited to 4000 byte output, so I can afford to ask for 4001
     in case the record is exactly 4000 bytes long. */
  l_amount := p_max_lengthb + l_rec_delim_length;
  l_offset := 1;
  while l_amount = p_max_lengthb + l_rec_delim_length loop
    begin
      dbms_lob.read(p_clob, l_amount, l_offset, l_buffer);
    exception when no_data_found then
      l_amount := 0;
    end;
    if l_amount = 0 then
      exit;
    elsif lengthb(l_buffer) <= p_max_lengthb then
      pipe row(rtrim(l_buffer, p_rec_delim));
      exit;
    end if;
    l_buff_lengthb := p_max_lengthb + l_rec_delim_length;
    l_occurence := 0;
    while l_buff_lengthb > p_max_lengthb loop
      l_occurence := l_occurence + 1;
      l_buff_lengthb := instrb(l_buffer, p_rec_delim, -1, l_occurence) - 1;
    end loop;
    if l_buff_lengthb < 0 then
      if l_amount = p_max_lengthb + l_rec_delim_length then
        raise_application_error(
          -20004,
          'Input clob at offset '||l_offset||' for lengthb '||p_max_lengthb||' has no record delimiter'
        );
      end if;
    end if;
    l_out := substrb(l_buffer, 1, l_buff_lengthb);
    pipe row(l_out);
    l_offset := l_offset + nvl(length(l_out),0) + l_rec_delim_length;
    l_amount := p_max_lengthb + l_rec_delim_length;
  end loop;
  return;
end pipe_clob;

In my laptop tests, read time increased linearly with the number of records. If you try it, let me know how it works out.