Optimistic Locking 6: which ORA_ROWSCN?

Continuing my series on using ORA_ROWSCN to achieve optimistic locking, let’s zoom in on ORA_ROWSCN itself. I’ve been asking myself this question:

  • ORA_ROWSCN can be maintained at the block level or the row level. What is the difference and which one should we use?

Previously in this series

I want to implement optimistic locking in a two-phase database API:

  • The “query API” queries data and sends it to the user.
  • The “update API” receives changes from the user to be applied to the database.

Here is how ORA_ROWSCN fits into the overall scenario:

  1. Client calls the query API to display data to the user.
  2. The query API returns the data to the user.
    A. All the rows that might be updated are SELECTed here.
    B. The SELECT ensures that ORA_ROWSCN will be less than or equal to the “read-consistent SCN”.
    C. The data is read-consistent, and the “read-consistent SCN” is returned along with the data.
  3. Client calls the update API to update the data.
    A. The “read-consistent SCN” from step 2 is sent back along with the updates.
  4. The update API compares each ORA_ROWSCN to the “read-consistent SCN”.
    A. If any ORA_ROWSCN is greater, then some intervening update has occurred
       and the update cannot continue. All lost updates are avoided.
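To make the flow concrete, here is a minimal sketch of the query API in PL/SQL. The procedure name, parameters and use of EMP are my own illustration, not a finished implementation, and DBMS_FLASHBACK requires the appropriate privilege:

```sql
create or replace procedure query_api(
    p_deptno in  number,
    p_scn    out number,         -- the "read-consistent SCN" sent to the client
    p_rows   out sys_refcursor
) as
begin
    -- Capture the SCN first, then query AS OF that SCN, so the data
    -- and the SCN returned to the client are consistent with each other.
    p_scn := dbms_flashback.get_system_change_number;
    open p_rows for
        select empno, ename, sal, deptno
        from emp as of scn p_scn
        where deptno = p_deptno;
end query_api;
/
```

The client would hold on to p_scn and send it back with its changes in step 3.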

I’m going to zoom in on statements 2B and 4. How they work depends on whether ORA_ROWSCN tracks the SCN at the block level or the row level.

Block Level or Row Level?

By default, ORA_ROWSCN reflects the “commit SCN” of the most recent change to any row in a block. If a table is created with ROWDEPENDENCIES, the size of each row is increased in order to store an SCN with each row. We can see the difference with this simple illustration:

SQL> create table t as select level n from dual connect by level <= 2;

Table T created.

SQL> select n, ora_rowscn from t;

N  ORA_ROWSCN
-- ----------
 1 7949719
 2 7949719

SQL> update t set n = 3 where n = 2;

1 row updated.

SQL> commit;

Commit complete.

SQL> select n, ora_rowscn from t;

N  ORA_ROWSCN
-- ----------
 1 7949731
 3 7949731

SQL> drop table t purge;

Table T dropped.

SQL> create table t rowdependencies as select level n from dual connect by level <= 2;

Table T created.

SQL> select n, ora_rowscn from t;

N  ORA_ROWSCN
-- ----------
 1 7949749
 2 7949749

SQL> update t set n = 3 where n = 2;

1 row updated.

SQL> commit;

Commit complete.

SQL> select n, ora_rowscn from t;

N  ORA_ROWSCN
-- ----------
 1 7949749
 3 7949760

As you can see, with block-level tracking every row in a block has the same ORA_ROWSCN, even for rows that were not changed in the most recent transaction.

Optimistic Locking with block-level SCN

In my previous post on the SCN, I was working with block-level SCNs. We saw that after a SELECT, all the data blocks that were read consistent now have ORA_ROWSCN values no greater than the read-consistent SCN. This is exactly what statement 2B above is saying.

Unfortunately, for statement 4 things are not so simple. If one row in a block has just been changed, the ORA_ROWSCN will increase for the other rows as well. When my “update API” checks one of the other rows, it will say the row has been updated when in fact it has not.

As a result, the SCN comparison will always detect updated rows, but it will sometimes say a row was updated when it was not. We probably need some confirmation before saying the row was truly changed.
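One conceivable shape for such a confirmation (the EMP columns and the bind variables :p_scn, :old_sal and :old_deptno are my own illustration; restart behavior, discussed elsewhere in this series, still needs care):

```sql
update emp
set sal = :new_sal
where empno = :p_empno
and (   ora_rowscn <= :p_scn                          -- block untouched: the row certainly did not change
     or (    decode(sal,    :old_sal,    1, 0) = 1    -- block touched: confirm that this row's values
         and decode(deptno, :old_deptno, 1, 0) = 1 )  -- are still what the user saw
    );                                                -- (DECODE treats two NULLs as equal)
```

The SCN comparison rules most rows in cheaply; the value comparison only decides the rare rows whose block changed.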

Optimistic Locking with row-level SCN

If we really want to depend on the SCN comparison alone, we should create the table with ROWDEPENDENCIES. There are some limitations when we do this: for example, table compression is not possible. The advantage is that our “statement 4” is now true: the ORA_ROWSCN of a row will only increase when that row was updated in a more recent transaction.

As I mentioned in my previous post on the SCN, it is possible that a row-level ORA_ROWSCN will be greater than the block-level SCN used to get read-consistent data. However, it is hard to imagine a real-life scenario where the row-level ORA_ROWSCN would be greater than the “read-consistent SCN” used in my query API.

Which ORA_ROWSCN should we use?

It depends, of course.

  • If we are free to create our tables with ROWDEPENDENCIES, and if it is enough to detect conflicting updates at the row level, then comparing ORA_ROWSCN to the “read-consistent” SCN should be sufficient for our needs.
  • If we are updating tables without ROWDEPENDENCIES, or if we want to detect real changes to values in the row,
    • then ORA_ROWSCN will tell us which rows have certainly not changed, so we can update them directly.
    • For the small number of rows that might have changed, we need to write code to check the values.
      Even then, our “read-consistent SCN” will help us do that – as I’ll explain in a later post.

Next up: Tom Kyte was wrong – about being wrong! ;)

 


Optimistic ORA_ROWSCN 2: the SCN

In my previous post, I covered lost updates. Before saying how ORA_ROWSCN can help avoid them, I need to talk about the SCN. It’s hard to be simple and correct about such a central element of the Oracle database, but I’ll try.

Aside from the documentation, I am relying on Oracle Core by Jonathan Lewis, chapter 3, along with this blog post and Jonathan’s answer to my question on OTN.

The Commit SCN

The documentation defines the SCN, or “System Change Number”, as a “database ordering primitive. The value of an SCN is the logical point in time at which changes are made to a database.” It also calls it a “stamp that defines a committed version of a database at a point in time. Oracle assigns every committed transaction a unique SCN.”

In other words:

  • Every change to data in the database is made as part of a transaction.
  • The changes in a transaction become effective when the transaction is committed.
  • Every committed transaction gets a unique SCN.
  • The SCN increments at each commit, which allows it to “order the database” in time.
  • Every “commit SCN” “defines a committed version of the database”, so the SCN is like a version number that applies to the entire database.

The “commit SCN” of a transaction is precisely the SCN that was assigned to a transaction on commit.

[UPDATE 2017-07-24: Jonathan Lewis just pointed me to a blog post by Frits Hoogland where he showed that it is possible for two commits to share the same SCN. To quote Frits: “both commits … have the same SCN. At this point I think it’s fitting to theorise why this is happening. I think this is happening and makes sense because the table is protected for consistency at another level in the database, which is the locking mechanisms (TX locks for rows, TM locks for segments), which guarantees that two transactions at the same time do not compromise data consistency. And because of that, it’s not a problem to batch them with the same SCN.”

Frits seems to have shown an error, or at least a simplification, in the documentation. Either that or “unique” means every transaction has just one “commit SCN”, even if one “commit SCN” can be associated with two simultaneous transactions.

Fortunately for me, this refinement doesn’t invalidate the rest of this post.]

[UPDATE 2017-08-01: thanks to Connor McDonald on asktom for posting this quote from the Concept Guide:

“Several events may share the same SCN, which means that they occurred at the same time in the database. Every transaction has an SCN. For example, if a transaction updates a row, then the database records the SCN at which this update occurred. Other modifications in this transaction have the same SCN.”

This seems to confirm that two transactions can share the same SCN.]

The SCN and Read Consistency

In Oracle, queries are “read consistent”, which means “The data returned by a query is committed and consistent for a single point in time.” This point in time is defined by – you guessed it – an SCN. I’ll call that SCN the “read-consistent SCN”. By default Oracle uses the SCN that was current when the query started, but within read-only or serializable transactions it uses the SCN when the transaction began.
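For example, a read-only transaction fixes one read-consistent SCN for every query inside it. A minimal sketch, using the T2 table from the example further down:

```sql
set transaction read only;   -- the read-consistent SCN is fixed here

select sum(val) from T2;     -- consistent as of the transaction start
select sum(val) from T2;     -- same SCN, so the same result,
                             -- even if other sessions commit in between
commit;                      -- ends the read-only transaction
```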

Now comes the hard part.

When we submit a query, Oracle gets the data blocks it needs to read the rows of interest. The problem is that each data block may contain uncommitted data, or data that was committed after the “read-consistent SCN”. To determine that, Oracle consults the ITL (Interested Transaction List) in the block header, and if necessary the UNDO that the ITL entries point to.

There are three cases to consider:

  1. All the ITL entries show committed transactions having “commit SCNs” that are less than or equal to the “read-consistent” SCN.
  2. All the ITL entries show committed transactions, but there is at least one “commit SCN” greater than the “read-consistent” SCN.
  3. At least one ITL entry shows a transaction that has not committed.

Case 1. is the most frequent and the simplest: none of the data is more recent than the read-consistent SCN, so the query may use the block as is.

Case 2. is not as simple as you might think. The “commit SCN” in an ITL entry may be an upper bound: it cannot be less than the real “commit SCN”, but it could be greater. In this case Oracle cross-checks with the “transaction table slot” in the UNDO segment header. If it finds that the transaction was actually committed before the read-consistent SCN, it will use the block in the query, and it will lower the “commit SCN” stored in the ITL entry.

Case 3. is similar to case 2. The ITL entry says “transaction not committed”, but that may be because Oracle did not have time to update the ITL entry before the block was flushed from memory. Oracle must cross-check with the “transaction table slot” in the UNDO segment header. If it says “actually, the transaction has committed”, then the ITL entry is updated with an upper bound “commit SCN”. This is known as “delayed block cleanout”.

We can see case 3. and case 2. at work in the following example, using two independent sessions:

  • Session 1
create table T1 (a number, ts timestamp, val number);
create table T2 (a number, ts timestamp, val number);

begin
    insert into T1 values (1, localtimestamp, 0);
    insert into T1 values (2, localtimestamp, 0);
    commit;
end;
/
begin
    insert into T2 values (3, localtimestamp, 0);
    commit;
end;
/
select dbms_flashback.get_system_change_number current_scn from dual;

CURRENT_SCN
-----------
   20031509

update T1 set ts = localtimestamp, val = 1 where a = 2;
alter system flush buffer_cache; -- (1)
commit;

begin
    for i in 1 .. 5000 loop
        update T2 set val = val + 1;
        commit; -- (2)
    end loop;
end;
/
select dbms_flashback.get_system_change_number current_scn from dual;

CURRENT_SCN
-----------
   20036606

(1) Notice that we update T1, then flush the block from memory before the COMMIT. This prevents Oracle from doing the “commit cleanout” in memory that would normally put the real “commit SCN” in the ITL entry.

(2) After the update to T1, we do 5000 transactions to T2 in order to increment the SCN.

  • Session 2
set transaction read only; -- This simulates a long-running query
  • Session 1
begin
    for i in 1 .. 5000 loop
        update T2 set val = val + 1;
        commit;
    end loop;
end;
/
select dbms_flashback.get_system_change_number current_scn,
ora_rowscn from T1 t;

CURRENT_SCN ORA_ROWSCN
----------- ----------
   20041709   20041359
   20041709   20041359

We use the pseudocolumn ORA_ROWSCN to query the “last change SCN” of the block, which corresponds to the highest SCN in the ITL. Notice the value is the same for both rows, even though we updated one row later than the other. Notice also that ORA_ROWSCN shows an SCN value that is clearly less than the current SCN (so we know the data is read consistent with the query) but much greater than the real commit SCN.

What happened here was what I called case 3. above. The data block was flushed from memory before the commit, so Oracle had to go to the UNDO to find out what happened. It discovered the transaction was committed some time ago, and updated the ITL entry to an SCN it could find efficiently and that was “good enough” for the query being executed. This SCN is still almost 10,000 more than the real “commit SCN”.

  • Session 2
select dbms_flashback.get_system_change_number current_scn,
ora_rowscn from T1 t;

CURRENT_SCN ORA_ROWSCN
----------- ----------
   20041714   20036604
   20041714   20036604

This is an example of what I called case 2. above. Oracle got the data block, found an SCN that was too recent for the query, and went to the UNDO to find out if the real commit SCN was lower. Once it found an SCN that was not greater than the read-consistent SCN (at the beginning of the transaction), it updated the ITL entry (together with the “last change SCN”) and used the data block in the query.

  • Session 1
select dbms_flashback.get_system_change_number current_scn,
ora_rowscn from T1 t;

CURRENT_SCN ORA_ROWSCN
----------- ----------
   20041714   20036604
   20041714   20036604

This is just to show that the ITL entry was well and truly changed for all sessions, not just for Session 2.

To summarize what we have seen: whenever a query accesses a data block, Oracle has to determine whether the block has changed earlier or later than the read-consistent SCN of the query. It may have to go to the UNDO to find that out, but if it does it updates one or more ITL entries so the next query doesn’t have to do the same work over again. At the end of the query, all the data blocks that were read consistent now have ORA_ROWSCN values no greater than the read-consistent SCN.

By the way, flashback queries work the same way. Using AS OF SCN nnnn, I can lower an ORA_ROWSCN several times in a row and get pretty close to the real commit SCN.
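For instance, something like this (the SCN values are purely illustrative; T1 is the table from the example above):

```sql
-- Each flashback query can force another cross-check with the UNDO,
-- lowering the upper-bound "commit SCN" stored in the ITL.
select ora_rowscn from T1 as of scn 20040000 t;
select ora_rowscn from T1 as of scn 20037000 t;
select ora_rowscn from T1 as of scn 20036650 t;
```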

ROWDEPENDENCIES

I said the pseudocolumn ORA_ROWSCN tells us the most recent SCN in the ITL. That is true if the table was created with NOROWDEPENDENCIES, which is the default. If we create the table using ROWDEPENDENCIES, Oracle makes room for an SCN for each row. For this kind of table, ORA_ROWSCN shows us the SCN stored with each row, not the overall SCN we have discussed up until now. I’ll have more to say about this in a later blog post.

(Updated 2017-09-12) If we went through the example above using a table with ROWDEPENDENCIES, we would see different results for case 2. The first time an ITL entry gets a “commit SCN” (whether “upper bound” or not), the affected rows get that same “commit SCN”. In case 2, the ITL entry gets a new, lower “commit SCN” – but it is not copied to the affected row.

As a result, it is conceivable that the ORA_ROWSCN of a row will be greater than the most recent “commit SCN” stored in the data block.

Summary

  • Oracle orders all transactions in time by assigning each one a unique SCN on commit: this is called the “commit SCN”. [UPDATE 2017-07-24: it may be possible for two transactions to share the same “commit SCN” (and therefore be simultaneous), but no transaction will ever have more than one “commit SCN”.]
  • Every query returns results that were consistent as of an SCN that I call the “read-consistent SCN”: it corresponds to the start of the query (or the start of the transaction).
  • During query processing, Oracle compares the two SCNs and returns results where the “commit SCN” is never greater than the “read-consistent SCN”.

Sometimes a data block has incomplete information in its ITL, and Oracle consults the UNDO to determine whether the “commit SCN” is greater than the “read-consistent SCN”. If it turns out not to be, Oracle will update the ITL to save work for the next query.

The end result is that, after the query finishes, every ITL entry for eligible blocks will contain an SCN less than or equal to the “read-consistent SCN” of the query. This is crucial for doing optimistic locking with ORA_ROWSCN.

More later…

Avoiding Lost Updates with ORA_ROWSCN

Applications today use “optimistic locking” to avoid lost updates. Tom Kyte thought of using ORA_ROWSCN for optimistic locking, then decided it didn’t work (see ORA_ROWSCN for optimistic locking). I now think it does!

If you don’t understand what an SCN is, and how it helps with read consistency, this post may go over your head. Sorry I can’t take the time to try explaining all that here…

Test Data and Use Case

By default, Oracle keeps one ORA_ROWSCN per block. However, we can use ROWDEPENDENCIES when creating the table to make room for one SCN per row.

I’m going to create the venerable EMP table with ROWDEPENDENCIES:

CREATE TABLE EMP (
    EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
    ENAME VARCHAR2(10),
    JOB VARCHAR2(9),
    MGR NUMBER(4),
    HIREDATE DATE,
    SAL NUMBER(7,2),
    COMM NUMBER(7,2),
    DEPTNO NUMBER(2)
) rowdependencies;

INSERT INTO EMP VALUES
(7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);
INSERT INTO EMP VALUES
(7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,30);
INSERT INTO EMP VALUES
(7521,'WARD','SALESMAN',7698,to_date('22-2-1981','dd-mm-yyyy'),1250,500,30);
INSERT INTO EMP VALUES
(7566,'JONES','MANAGER',7839,to_date('2-4-1981','dd-mm-yyyy'),2975,NULL,20);
INSERT INTO EMP VALUES
(7654,'MARTIN','SALESMAN',7698,to_date('28-9-1981','dd-mm-yyyy'),1250,1400,30);
INSERT INTO EMP VALUES
(7698,'BLAKE','MANAGER',7839,to_date('1-5-1981','dd-mm-yyyy'),2850,NULL,30);
INSERT INTO EMP VALUES
(7782,'CLARK','MANAGER',7839,to_date('9-6-1981','dd-mm-yyyy'),2450,NULL,10);
INSERT INTO EMP VALUES
(7788,'SCOTT','ANALYST',7566,to_date('13-JUL-87','dd-mm-rr')-85,3000,NULL,20);
INSERT INTO EMP VALUES
(7839,'KING','PRESIDENT',NULL,to_date('17-11-1981','dd-mm-yyyy'),5000,NULL,10);
INSERT INTO EMP VALUES
(7844,'TURNER','SALESMAN',7698,to_date('8-9-1981','dd-mm-yyyy'),1500,0,30);
INSERT INTO EMP VALUES
(7876,'ADAMS','CLERK',7788,to_date('13-JUL-87', 'dd-mm-rr')-51,1100,NULL,20);
INSERT INTO EMP VALUES
(7900,'JAMES','CLERK',7698,to_date('3-12-1981','dd-mm-yyyy'),950,NULL,30);
INSERT INTO EMP VALUES
(7902,'FORD','ANALYST',7566,to_date('3-12-1981','dd-mm-yyyy'),3000,NULL,20);
INSERT INTO EMP VALUES
(7934,'MILLER','CLERK',7782,to_date('23-1-1982','dd-mm-yyyy'),1300,NULL,10);
commit;

select distinct ora_rowscn from emp;

ORA_ROWSCN
----------
  13848205

In honor of Oracle’s 40-year birthday, SMITH is going to submit a batch job to give everyone a 10% raise, including himself. At the same time, MILLER is doing a reorganization: he assigns SMITH to department 30, using an OLTP application that updates both department number and salary.

Lost Updates

Here is what happens when an application neglects “optimistic locking”:

-- A1 (BATCH): SMITH submits a batch job for 10% raises

update emp set sal = sal * 1.1;

-- A2 (MILLER): MILLER queries SMITH and sees committed data

select empno, sal, deptno from emp
where ename = 'SMITH';

EMPNO  SAL DEPTNO
 7369  800     20

-- A3 (MILLER): MILLER reassigns SMITH: the update waits for the batch job to commit

update emp set sal = 800, deptno = 30
where empno = 7369;

-- A4 (BATCH): the batch job commits

commit;

-- MILLER's application sees "1 rows updated."

-- A5 (MILLER): MILLER commits

commit;

-- A6: SMITH discovers he was the only one who didn't get a raise!

select empno, sal, deptno from emp
where ename = 'SMITH';

EMPNO  SAL DEPTNO
 7369  800     30

You see what happened here? Oracle told MILLER that SMITH’s salary was 800, because that was the latest committed value. The batch update had not committed, so the new salary was invisible to any session but the batch session itself. Since the OLTP application updated the salary as well, it replaced the new salary with the old one.

For details on lost updates, see the documentation on Data Concurrency and Consistency.

Optimistic Locking using column values

To avoid lost updates, the application has to check during its update that someone else has not changed the data in the meantime.

-- B1 (BATCH): SMITH submits a batch job for 10% raises

update emp set sal = sal * 1.1;

-- B2 (MILLER): MILLER queries SMITH and sees committed data

select empno, sal, deptno from emp
where ename = 'SMITH';

EMPNO  SAL DEPTNO
 7369  800     20

-- B3 (MILLER): MILLER reassigns SMITH: the update waits for the batch job to commit

update emp set sal = 800, deptno = 30
where empno = 7369
and sal = 800 and deptno = 20;

-- B4 (BATCH): the batch job commits

commit;

-- MILLER's application sees "0 rows updated."
-- and explains to MILLER that someone else already changed the data.

-- B5: SMITH got his raise!

select empno, sal, deptno from emp
where ename = 'SMITH';

EMPNO  SAL DEPTNO
 7369  880     20

You may be wondering why Step B3 didn’t find the row it wanted to update. That is because of “restartability”:

  • Oracle did a “consistent get” to get the latest committed values, where SAL was 800
  • but then it did a “current get” to apply MILLER’s change to the current version of the data. This is when it blocked, waiting for the batch job to commit and release the lock on that row.
  • When the “current get” finally happened, Oracle noticed that the SAL value had changed, so it restarted the update: it did another consistent get, at which time it found no rows that matched the WHERE clause – so it did not do the update.

For a detailed explanation of restartability, see these posts by Tom Kyte:

http://tkyte.blogspot.fr/2005/08/something-different-part-i-of-iii.html

http://tkyte.blogspot.fr/2005/08/part-ii-seeing-restart.html

http://tkyte.blogspot.fr/2005/09/part-iii-why-is-restart-important-to.html

Why not use ORA_ROWSCN?

Tom discovered that putting ORA_ROWSCN in the WHERE clause did not trigger a restart. In our scenario:

-- C1 (BATCH): SMITH submits a batch job for 10% raises

update emp set sal = sal * 1.1;

-- C2 (MILLER): MILLER queries SMITH and sees committed data

select empno, sal, deptno, ora_rowscn from emp
where ename = 'SMITH';

EMPNO  SAL DEPTNO ORA_ROWSCN
 7369  800     20   13857633

-- C3 (MILLER): MILLER reassigns SMITH: the update waits for the batch job to commit

update emp set sal = 800, deptno = 30
where empno = 7369 and ora_rowscn = 13857633;

-- C4 (BATCH): the batch job commits

commit;

-- MILLER's application sees "1 rows updated."

No reason to go any further: the update did not see that the batch job had changed the ORA_ROWSCN.

Making ORA_ROWSCN work!

Here is what I stumbled on: in the WHERE clause, referring to ORA_ROWSCN will not trigger a restart, but referring to SAL, which changed, will trigger a restart. Once the restart is triggered, the new “consistent get” will see the new value of ORA_ROWSCN. We don’t need the old value of SAL; we just need to mention it. For example:

-- D1 (BATCH): SMITH submits a batch job for 10% raises

update emp set sal = sal * 1.1;

-- D2 (MILLER): MILLER queries SMITH and sees committed data

select empno, sal, deptno, ora_rowscn from emp
where ename = 'SMITH';

EMPNO  SAL DEPTNO ORA_ROWSCN
 7369  800     20   13865770

-- D3 (MILLER): MILLER reassigns SMITH: the update waits for the batch job to commit

update emp set sal = 800, deptno = 30
where empno = 7369
and ora_rowscn = 13865770
and coalesce('*',sal||deptno) is not null;

-- D4 (BATCH): the batch job commits

commit;

-- MILLER's application sees "0 rows updated."
-- and explains to MILLER that someone else already changed the data.

I decided to use COALESCE to refer to the column values because it uses short-circuit evaluation. Once it determines that ‘*’ is not null, it doesn’t bother evaluating the concatenation of the columns. Also, concatenation implicitly converts every datatype to VARCHAR2 so I can list strings, dates, timestamps, numbers and intervals without worry.
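To generalize the pattern: the WHERE clause needs the ORA_ROWSCN check plus a COALESCE that merely mentions every column being updated. A sketch with illustrative bind variable names:

```sql
update emp
set sal = :new_sal, deptno = :new_deptno
where empno = :empno
and ora_rowscn = :scn_from_select                -- the "version number" check
and coalesce('*', sal || deptno) is not null;    -- mention the columns to trigger a restart
```

Only :scn_from_select carries information from the prior SELECT; the column values themselves never have to be sent back.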

Conclusion

ORA_ROWSCN can indeed be used as a “version number” that can be checked to avoid lost updates. The UPDATE statement just has to list the appropriate columns in the WHERE clause, but it doesn’t have to check them against values from a prior SELECT statement. The only real value that needs to be checked is ORA_ROWSCN.

There are some limitations and caveats here:

  1. If your table has not been created with ROWDEPENDENCIES, ORA_ROWSCN will change every time any row in the block has changed. The table has to be “recreated” with ROWDEPENDENCIES for this to work.
  2. You cannot use this method with Index Organized Tables! If you create such a table with ROWDEPENDENCIES, then SELECT ORA_ROWSCN… will return NULL.
  3. You might wonder whether a “delayed block cleanout” will cause the ORA_ROWSCN to change without the data having changed. I don’t see how that could be a problem, since the SELECT done to get the data (and the ORA_ROWSCN) will do the cleanout as part of its processing.
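Caveat 2 is easy to check on your own system; here is a sketch (the table name is my own invention):

```sql
create table iot_t (n number primary key, v number)
organization index rowdependencies;

insert into iot_t values (1, 0);
commit;

select n, ora_rowscn from iot_t;
-- per caveat 2 above, ORA_ROWSCN comes back NULL
```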

#DOAG2016: Ranges, Ranges Everywhere!

Today I did my second presentation at DOAG2016. It was at 9:00 so I got to sleep in ;)

The room was huge but there were enough people that I didn’t feel too lonely.

The room and the technical help were top notch, and again there were questions at just the right time to remind me of things I might have left out!

[Update 2016-11-20: there was a bug on slide 21 ! Instead of “order by end_n” it should be “order by start_n”. I have updated the slide on the DOAG website and on slideshare. My test cases were not thorough; my apologies…]

As promised, I put the presentation on Slideshare. Here is the link:

I will be blogging about some of the content later on, so stay tuned…

As always, please download the slides and play them in Powerpoint so you can see the animations.

Thanks to DOAG for the invite!

#DOAG2016: Advanced Row Pattern Matching

DOAG2016 started today at 8:30, and I did too. There were so many great presentations at the same time as mine, I was surprised and pleased to get a nice audience.

The room and the technical help were top notch, and the questions came at just the right time to remind me of things I might have left out!

As promised, I put the presentation on Slideshare. Here is the link:

http://www.slideshare.net/StewAshton1/advanced-row-pattern-matching

I will be blogging about some of the content later on, so stay tuned…

If you want to start with an “unadvanced” presentation, go here first:

http://www.slideshare.net/StewAshton1/row-pattern-matching-in-oracle-database-12c

As always, please download the slides and play them in Powerpoint so you can see the animations.

Thanks to DOAG for the invite!