Avoiding Lost Updates with ORA_ROWSCN

Applications today use “optimistic locking” to avoid lost updates. Tom Kyte thought of using ORA_ROWSCN for optimistic locking, then decided it didn’t work (see ORA_ROWSCN for optimistic locking). I now think it does!

If you don’t understand what an SCN is, and how it helps with read consistency, this post may go over your head. Sorry I can’t take the time to try explaining all that here…

Test Data and Use Case

Oracle generally keeps one ORA_ROWSCN per block, since that is enough (with the UNDO) to ensure read consistency. However, we can use ROWDEPENDENCIES when creating the table to make room for one SCN per row.

I’m going to create the venerable EMP table with ROWDEPENDENCIES:

CREATE TABLE EMP (
    EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
    ENAME VARCHAR2(10),
    JOB VARCHAR2(9),
    MGR NUMBER(4),
    HIREDATE DATE,
    SAL NUMBER(7,2),
    COMM NUMBER(7,2),
    DEPTNO NUMBER(2)
) rowdependencies;

INSERT INTO EMP VALUES
(7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);
INSERT INTO EMP VALUES
(7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,30);
INSERT INTO EMP VALUES
(7521,'WARD','SALESMAN',7698,to_date('22-2-1981','dd-mm-yyyy'),1250,500,30);
INSERT INTO EMP VALUES
(7566,'JONES','MANAGER',7839,to_date('2-4-1981','dd-mm-yyyy'),2975,NULL,20);
INSERT INTO EMP VALUES
(7654,'MARTIN','SALESMAN',7698,to_date('28-9-1981','dd-mm-yyyy'),1250,1400,30);
INSERT INTO EMP VALUES
(7698,'BLAKE','MANAGER',7839,to_date('1-5-1981','dd-mm-yyyy'),2850,NULL,30);
INSERT INTO EMP VALUES
(7782,'CLARK','MANAGER',7839,to_date('9-6-1981','dd-mm-yyyy'),2450,NULL,10);
INSERT INTO EMP VALUES
(7788,'SCOTT','ANALYST',7566,to_date('13-JUL-87','dd-mm-rr')-85,3000,NULL,20);
INSERT INTO EMP VALUES
(7839,'KING','PRESIDENT',NULL,to_date('17-11-1981','dd-mm-yyyy'),5000,NULL,10);
INSERT INTO EMP VALUES
(7844,'TURNER','SALESMAN',7698,to_date('8-9-1981','dd-mm-yyyy'),1500,0,30);
INSERT INTO EMP VALUES
(7876,'ADAMS','CLERK',7788,to_date('13-JUL-87', 'dd-mm-rr')-51,1100,NULL,20);
INSERT INTO EMP VALUES
(7900,'JAMES','CLERK',7698,to_date('3-12-1981','dd-mm-yyyy'),950,NULL,30);
INSERT INTO EMP VALUES
(7902,'FORD','ANALYST',7566,to_date('3-12-1981','dd-mm-yyyy'),3000,NULL,20);
INSERT INTO EMP VALUES
(7934,'MILLER','CLERK',7782,to_date('23-1-1982','dd-mm-yyyy'),1300,NULL,10);
commit;

select distinct ora_rowscn from emp;
ORA_ROWSCN
13848205

 

In honor of Oracle’s 40-year birthday, SMITH is going to submit a batch job to give everyone a 10% raise, including himself. At the same time, MILLER is doing a reorganization: he assigns SMITH to department 30, using an OLTP application that updates both department number and salary.

Lost Updates

Here is what happens when an application neglects “optimistic locking”:

Two sessions are involved: BATCH runs SMITH’s batch job, while MILLER uses the OLTP application.

BATCH – A1: SMITH submits a batch job for 10% raises

update emp set sal = sal * 1.1;

MILLER – A2: MILLER queries SMITH and sees committed data

select empno, sal, deptno from emp
where ename = 'SMITH';

EMPNO SAL DEPTNO
7369 800 20

MILLER – A3: MILLER reassigns SMITH: the update waits for the batch job to commit

update emp set sal = 800, deptno = 30
where empno = 7369;

BATCH – A4: the batch job commits

commit;

MILLER: the blocked update now completes and the application sees “1 row updated.”

MILLER – A5: MILLER commits

commit;

BATCH – A6: SMITH discovers he was the only one who didn’t get a raise!

select empno, sal, deptno from emp
where ename = 'SMITH';

EMPNO SAL DEPTNO
7369 800 30

You see what happened here? Oracle told MILLER that SMITH’s salary was 800, because that was the latest committed value. The batch update had not committed, so the new salary was invisible to any session but the batch session itself. Since the OLTP application updated the salary as well, it replaced the new salary with the old one.

For details on lost updates, see the documentation on Data Concurrency and Consistency.

Optimistic Locking using column values

To avoid lost updates, the application’s UPDATE has to check that no one else has changed the data in the meantime.

BATCH – B1: SMITH submits a batch job for 10% raises

update emp set sal = sal * 1.1;

MILLER – B2: MILLER queries SMITH and sees committed data

select empno, sal, deptno from emp
where ename = 'SMITH';

EMPNO SAL DEPTNO
7369 800 20

MILLER – B3: MILLER reassigns SMITH: the update waits for the batch job to commit

update emp set sal = 800, deptno = 30
where empno = 7369
and sal = 800 and deptno = 20;

BATCH – B4: the batch job commits

commit;

MILLER: the blocked update now completes, but the application sees “0 rows updated.” and explains to MILLER that someone else already changed the data.

BATCH – B5: SMITH got his raise!

select empno, sal, deptno from emp
where ename = 'SMITH';

EMPNO SAL DEPTNO
7369 880 20

You may be wondering why Step B3 didn’t find the row it wanted to update. That is because of “restartability”:

  • Oracle did a “consistent get” to get the latest committed values, where SAL was 800
  • but then it did a “current get” to apply MILLER’s change to the current version of the data. This is when it blocked, waiting for the batch job to commit and release the lock on that row.
  • When the “current get” finally happened, Oracle noticed that the SAL value had changed, so it restarted the update: it did another consistent get, at which time it found no rows that matched the WHERE clause – so it did not do the update.
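If you want to see the restart with your own eyes, a row-level trigger that writes to DBMS_OUTPUT will fire a second time for the same row when the statement restarts (a trigger-based demo similar to what Tom shows in the posts linked just below). Here is a minimal sketch; the trigger name is mine, and you should drop the trigger afterwards so it doesn’t interfere with the other scenarios:

create or replace trigger emp_show_restart
before update on emp
for each row
begin
  -- one line per row touched by the UPDATE;
  -- after a restart, the same EMPNO is reported a second time
  dbms_output.put_line('updating empno ' || :old.empno || ', sal seen = ' || :old.sal);
end;
/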

For a detailed explanation of restartability, see these posts by Tom Kyte:

http://tkyte.blogspot.fr/2005/08/something-different-part-i-of-iii.html

http://tkyte.blogspot.fr/2005/08/part-ii-seeing-restart.html

http://tkyte.blogspot.fr/2005/09/part-iii-why-is-restart-important-to.html

Why not use ORA_ROWSCN?

Tom discovered that putting ORA_ROWSCN in the WHERE clause did not trigger a restart. In our scenario:

BATCH – C1: SMITH submits a batch job for 10% raises

update emp set sal = sal * 1.1;

MILLER – C2: MILLER queries SMITH and sees committed data

select empno, sal, deptno, ora_rowscn from emp
where ename = 'SMITH';

EMPNO SAL DEPTNO ORA_ROWSCN
7369 800 30 13857633

MILLER – C3: MILLER reassigns SMITH: the update waits for the batch job to commit

update emp set sal = 800, deptno = 30
where empno = 7369 and ora_rowscn = 13857633;

BATCH – C4: the batch job commits

commit;

MILLER: the blocked update completes and the application sees “1 row updated.”

No reason to go any further: the update did not see that the batch job had changed the ORA_ROWSCN.

Making ORA_ROWSCN work!

Here is what I stumbled on: in the WHERE clause, referring to ORA_ROWSCN will not trigger a restart, but referring to SAL, which changed, will trigger a restart – and once the restart is triggered, the new “consistent get” will see the new value of ORA_ROWSCN. We don’t need the old value of SAL, we just need to mention the column. For example:

BATCH – D1: SMITH submits a batch job for 10% raises

update emp set sal = sal * 1.1;

MILLER – D2: MILLER queries SMITH and sees committed data

select empno, sal, deptno, ora_rowscn from emp
where ename = 'SMITH';

EMPNO SAL DEPTNO ORA_ROWSCN
7369 800 30 13865770

MILLER – D3: MILLER reassigns SMITH: the update waits for the batch job to commit

update emp set sal = 800, deptno = 30
where empno = 7369
and ora_rowscn = 13865770
and coalesce('*', sal||deptno) is not null;

BATCH – D4: the batch job commits

commit;

MILLER: the blocked update completes, but the application sees “0 rows updated.” and explains to MILLER that someone else already changed the data.

I decided to use COALESCE to refer to the column values because it uses short-circuit evaluation. Once it determines that ‘*’ is not null, it doesn’t bother evaluating the concatenation of the columns. Also, concatenation implicitly converts every datatype to VARCHAR2 so I can list strings, dates, timestamps, numbers and intervals without worry.
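To generalize, here is a sketch of the SELECT / UPDATE pair an application might issue; the bind variable names are made up, and the COALESCE should mention every column the UPDATE might overwrite:

-- read the row and keep ORA_ROWSCN as the "version number"
select empno, sal, deptno, ora_rowscn
from emp
where empno = :empno;

-- update the row: ORA_ROWSCN is the only value actually checked,
-- but SAL and DEPTNO must be mentioned so that a change to either one triggers a restart
update emp set sal = :new_sal, deptno = :new_deptno
where empno = :empno
and ora_rowscn = :ora_rowscn_from_select
and coalesce('*', sal || deptno) is not null;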

Conclusion

ORA_ROWSCN can indeed be used as a “version number” that can be checked to avoid lost updates. The UPDATE statement just has to list the appropriate columns in the WHERE clause, but it doesn’t have to check them against values from a prior SELECT statement. The only real value that needs to be checked is ORA_ROWSCN.

There are some limitations and caveats here:

  1. If your table has not been created with ROWDEPENDENCIES, ORA_ROWSCN will change every time any row in the block changes. The table has to be “recreated” with ROWDEPENDENCIES for this to work (see the query after this list to check whether a table already has it).
  2. You cannot use this method with Index Organized Tables! If you create such a table with ROWDEPENDENCIES, then SELECT ORA_ROWSCN… will return NULL.
  3. You might wonder whether a “delayed block cleanout” will cause the ORA_ROWSCN to change without the data having changed. I don’t see how that could be a problem, since the SELECT done to get the data (and the ORA_ROWSCN) will do the cleanout as part of its processing.
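Concerning caveat 1: to check whether an existing table already has row-level dependency tracking, query the DEPENDENCIES column of USER_TABLES; it says ENABLED for tables created with ROWDEPENDENCIES and DISABLED otherwise. For example:

select table_name, dependencies
from user_tables
where table_name = 'EMP';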

#DOAG2016: Ranges, Ranges Everywhere!

Today I did my second presentation at DOAG2016. It was at 9:00 so I got to sleep in ;)

The room was huge but there were enough people that I didn’t feel too lonely.

The room and the technical help were top notch, and again there were questions at just the right time to remind me of things I might have left out!

[Update 2016-11-20: there was a bug on slide 21! Instead of “order by end_n” it should be “order by start_n”. I have updated the slide on the DOAG website and on Slideshare. My test cases were not thorough; my apologies…]

As promised, I put the presentation on Slideshare. Here is the link:

I will be blogging about some of the content later on, so stay tuned…

As always, please download the slides and play them in Powerpoint so you can see the animations.

Thanks to DOAG for the invite!

#DOAG2016: Advanced Row Pattern Matching

DOAG2016 started today at 8:30, and I did too. There were so many great presentations at the same time as mine, I was surprised and pleased to get a nice audience.

The room and the technical help were top notch, and the questions came at just the right time to remind me of things I might have left out!

As promised, I put the presentation on Slideshare. Here is the link:

http://www.slideshare.net/StewAshton1/advanced-row-pattern-matching

I will be blogging about some of the content later on, so stay tuned…

If you want to start with an “unadvanced” presentation, go here first:

http://www.slideshare.net/StewAshton1/row-pattern-matching-in-oracle-database-12c

As always, please download the slides and play them in Powerpoint so you can see the animations.

Thanks to DOAG for the invite!

I’m speaking at #DOAG2016 and #ukoug_tech16

This year I get to speak about advanced SQL twice at two different conferences. My first presentation is about row pattern matching with MATCH_RECOGNIZE and my second deals with ranges – including but not limited to Temporal Validity ranges.

#DOAG2016

DOAG2016 goes from November 15th through 18th in Nuremberg, Germany. I speak early in the morning on the 15th and 16th, at the same time as a huge number of great speakers. If you come to my talks anyway, I guarantee you will get great seats!

Meet your Match: Advanced Row Pattern Matching November 15th, 8:30

Ranges, Ranges Everywhere! November 16th, 9:00

#UKOUG_Tech16

UKOUG Tech 16 goes from December 5th through 7th in Birmingham, UK – with bonus content on “Super Sunday” afternoon, December 4th. I speak on Sunday and Monday afternoon, again at the same time as many great speakers.

Meet your Match: Advanced Row Pattern Matching December 4th, 16:10 (4:10 P.M.)

Ranges, Ranges Everywhere! December 5th, 14:10 (2:10 P.M.)

Hope to see you there!

 

Splitting Strings: PL/SQL

Marc Bleron and Todd Hershiser gave me some very valuable feedback through their comments on my recent “Splitting Strings” posts. The big news is: PL/SQL beats SQL!

fn:tokenize

XQuery provides some built-in functions prefixed by “fn”. fn:tokenize is equivalent to ora:tokenize except that it doesn’t break when the delimiter is absent from the input string: it just returns the string. Marc says fn:tokenize is supported, and it does work in 12.1.0.2, but the official documentation says “not supported”. I have asked Marc for more information.

This is not a big deal. With fn:tokenize, there would simply be a bit less typing.
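To illustrate the “less typing”, here is roughly what the tokenize query from the test harness further down would look like with fn:tokenize: the “if (contains(…))” wrapper goes away. Treat this as a sketch with a made-up input string:

select subs from xmltable(
  'fn:tokenize($X, ",")'
  passing 'A,B,C' as X
  columns subs varchar2(4000) path '.'
);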

Escaping the delimiter

Todd pointed out that tokenize splits a string based on a regular expression, and some common delimiters (like ‘|’) have special meaning in regular expressions. As a result, I now recommend simply escaping the delimiter with a backslash ‘\’.
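For example, to split on ‘|’ the pattern must be “\|”, since an unescaped ‘|’ means “or”. A quick sketch, again with a made-up input string:

select subs from xmltable(
  'if (contains($X,"|")) then ora:tokenize($X,"\|") else $X'
  passing 'A|B|C' as X
  columns subs varchar2(4000) path '.'
);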

PL/SQL

Todd provided a string splitting function in PL/SQL and claimed it is clearly faster than ora:tokenize. He is right!

I wrote a function similar to his and compared it to the “tokenize” solution. Here is the function:

create or replace function string_tokenize(
  p_string in varchar2,
  p_delim in varchar2
)
return sys.odcivarchar2list pipelined
as
  i_prev_pos integer := 1;
  i_pos integer;
  i_max_pos integer := length(p_string) + 1;  -- position just past the end of the string
  i_delim_length integer := length(p_delim);
begin
  loop
    -- find the next delimiter; if there is none, pretend there is one just past the end
    i_pos := instr(p_string, p_delim, i_prev_pos);
    if i_pos = 0 then
      i_pos := i_max_pos;
    end if;
    -- return everything between the previous delimiter and this one
    pipe row(substr(p_string, i_prev_pos, i_pos - i_prev_pos));
    exit when i_pos = i_max_pos;
    i_prev_pos := i_pos + i_delim_length;
  end loop;
  return;
end string_tokenize;
/
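A quick sanity check on a made-up literal:

select column_value as subs
from table(string_tokenize('A,B,C', ','));

which returns the three rows A, B and C.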

By the way, I tested this function with and without the PRAGMA UDF clause introduced in 12. I found no difference in performance in this case. Here is my final test harness:

set serveroutput on
declare
  l_num number;
  l_timestamp timestamp;
  l_plsql_secs number;
  l_tokenize_secs number;
  l_num_substrings number := 10;
  procedure do(p_sql in varchar2) is
  begin
    dbms_output.put_line(p_sql);
    execute immediate p_sql;
  end do;
begin
  select count(*) into l_num from user_tables where table_name = 'T';
  if l_num > 0 then
    do('drop table t purge');
  end if;
  do('create table t(id number, str varchar2(4000)) cache');
  insert into t
  select level, to_char(level,'fm000000000')||',000000002,000000003,000000004,000000005,000000006,000000007,000000008,000000009,000000010'
  from dual
  connect by level <= 10000;
  commit;
  dbms_output.put_line('Substrings' || chr(9) || 'tokenize' || chr(9) || 'PL/SQL');
  for i in 1..10 loop
    -- warm the buffer cache
    select count(*) into l_num from t;

    -- time the PL/SQL pipelined function
    l_timestamp := localtimestamp;
    select count(column_value) into l_num from (
      select id, column_value from t, table(string_tokenize(str, ','))
    );
    l_plsql_secs := extract(second from localtimestamp - l_timestamp);

    -- time the ora:tokenize solution
    l_timestamp := localtimestamp;
    select count(subs) into l_num from (
      select id, subs from t, xmltable(
        'if (contains($X,",")) then ora:tokenize($X,"\,") else $X'
        passing str as X columns subs varchar2(4000) path '.')
    );
    l_tokenize_secs := extract(second from localtimestamp - l_timestamp);
    dbms_output.put_line(l_num_substrings || chr(9) || l_tokenize_secs || chr(9) || l_plsql_secs);

    -- add 10 more substrings to each row for the next iteration
    update t set str =
    str || ',000000001,000000002,000000003,000000004,000000005,000000006,000000007,000000008,000000009,000000010';
    l_num_substrings := l_num_substrings + 10;
    commit;
  end loop;
end;
/

Notice that I keep the same number of input rows here, whereas in my previous tests I kept the same number of output rows. My “tokenize” solution scales OK, but the PL/SQL function is much faster and scales even better.

In this case a combined SQL + PL/SQL solution beats the best pure SQL solution.