Optimistic Locking: One SCN to rule them all

Previously I showed how to avoid lost updates with ORA_ROWSCN. Now let’s create an API that avoids lost updates with just one SCN.

What kind of API?

A transaction consists of one or more changes to data that should happen together: either all should happen or none. When the transaction commits, all the changes happen; when the transaction rolls back, none of them happen. This is the “atomic” part of ACID, the set of properties of database transactions.

A “statement”, be it SQL or a call to the PL/SQL engine, is also atomic:

  • When a “client” calls the database and the call returns with an exception, then any changes made during the call are automatically rolled back. All the data is exactly what it was before the call was made.
  • Of course, when everything goes well the changes are still there.
  • In both cases, it is up to the client to COMMIT or ROLLBACK whatever changes remain pending within the transaction.

This “statement-level atomicity” is guaranteed as long as no COMMITs or ROLLBACKs are executed during the call, and as long as the exception is allowed to return to the client.
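As a quick sketch of statement-level atomicity (procedure name and data are made up): if the second INSERT below raises an exception that reaches the client, the first INSERT is rolled back automatically, with no SAVEPOINT in our code.

```sql
create or replace procedure insert_two_depts as
begin
    insert into dept(deptno, dname, loc) values (50, 'IT', 'PARIS');
    -- duplicate DEPTNO: raises DUP_VAL_ON_INDEX if DEPTNO is the primary key
    insert into dept(deptno, dname, loc) values (50, 'HR', 'LYON');
end insert_two_depts;
/
```

Called directly from a client, the procedure inserts both rows or neither: when the exception returns to the client, the first row is gone too.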

So there are two kinds of atomicity we can leverage: at the statement level and the transaction level.

Either way, when a client calls an “atomic API”, it must provide all the input necessary to make all the changes that form a logical unit. If the client has to make more than one call, then the API is obviously not “atomic”.

How does optimistic locking fit in?

Optimistic locking is a two-step process:

  1. The application gets data from the “old” rows.
    Since the rows must be checked later for changes, row-based version numbers or checksums may accompany the data.
  2. The application sends the “new” data back, together with whatever “old” data is needed to check the rows for changes, and the checking is executed within the UPDATE statement.
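With row-based version numbers, step 2 boils down to an UPDATE of this shape (VERSION_NO here is a hypothetical extra column, not part of my example tables):

```sql
update dept
   set dname = :new_dname,
       loc = :new_loc,
       version_no = version_no + 1   -- bump the version on every change
 where deptno = :deptno
   and version_no = :old_version_no; -- updates 0 rows if someone else got there first
```

If the UPDATE touches zero rows, the application knows the row changed since it was read, and can refuse the update instead of losing someone else's work.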

I suggest that for each API that updates the database, we provide a companion API that supplies the “old” data in the first place.

Getting the “old” data – and one SCN

Ideally, the “old” data should be read consistent: all the data should have existed at the same point in time in the database. Oracle guarantees this consistency within one SQL query, but it does not normally guarantee it across queries. To get read consistency across queries, use a “read-only transaction” or flashback queries.
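For example, assuming the necessary privileges, either of these techniques makes several queries see the same point in time:

```sql
-- Read-only transaction: every query until COMMIT sees the same point in time
set transaction read only;
select * from dept;
select * from emp;
commit;

-- Flashback queries: each query explicitly pinned to one SCN
select * from dept as of scn :scn;
select * from emp as of scn :scn;
```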

However we do it, if we get read consistent data we can also get the SCN used by our query or queries. If any of the rows change afterwards, the ORA_ROWSCN of those rows must be greater than that SCN. We can use the SCN like a universal version number to check for any changes to any of the rows we queried.

Example: getting the data

I’m going to demonstrate using the 12cR2 SQL/JSON functions, because they allow me to get all the data I need for one logical unit in a single SQL statement, even if the rows involved are not related and have different columns.

In my stupid example, I assume a user wants to manually update the departments in the DEPT table: the DNAME and LOC columns should resemble the DEPARTMENT_NAME and CITY columns from the DEPARTMENTS and LOCATIONS tables in the HR schema.

with json_departments as (
    select json_arrayagg(
        json_object(
            'DEPARTMENT_ID' value DEPARTMENT_ID,
            'DEPARTMENT_NAME' value DEPARTMENT_NAME,
            'CITY' value CITY,
            'STATE_PROVINCE' value STATE_PROVINCE,
            'COUNTRY_ID' value COUNTRY_ID
        )
        order by DEPARTMENT_ID
    ) departments
    from hr.departments join hr.locations using(location_id)
    where DEPARTMENT_ID <= 40
)
, json_depts as (
    select json_arrayagg(
        json_object(
            'DEPTNO' value DEPTNO,
            'DNAME' value DNAME,
            'LOC' value LOC
        )
        order by deptno
    ) depts
    from dept
)
select json_object(
    'SCN' value timestamp_to_scn(systimestamp),
    'DEPARTMENTS' value departments format json,
    'DEPTS' value depts format json
) json_output
from json_departments, json_depts;
{
  "SCN": 14080746,
  "DEPARTMENTS": [
    {
      "DEPARTMENT_ID": 10,
      "DEPARTMENT_NAME": "Administration",
      "CITY": "Seattle",
      "STATE_PROVINCE": "Washington",
      "COUNTRY_ID": "US"
    },
    {
      "DEPARTMENT_ID": 20,
      "DEPARTMENT_NAME": "Marketing",
      "CITY": "Toronto",
      "STATE_PROVINCE": "Ontario",
      "COUNTRY_ID": "CA"
    },
    {
      "DEPARTMENT_ID": 30,
      "DEPARTMENT_NAME": "Purchasing",
      "CITY": "Seattle",
      "STATE_PROVINCE": "Washington",
      "COUNTRY_ID": "US"
    },
    {
      "DEPARTMENT_ID": 40,
      "DEPARTMENT_NAME": "Human Resources",
      "CITY": "London",
      "STATE_PROVINCE": null,
      "COUNTRY_ID": "UK"
    }
  ],
  "DEPTS": [
    {
      "DEPTNO": 10,
      "DNAME": "ACCOUNTING",
      "LOC": "NEW YORK"
    },
    {
      "DEPTNO": 20,
      "DNAME": "RESEARCH",
      "LOC": "DALLAS"
    },
    {
      "DEPTNO": 30,
      "DNAME": "SALES",
      "LOC": "CHICAGO"
    },
    {
      "DEPTNO": 40,
      "DNAME": "OPERATIONS",
      "LOC": "BOSTON"
    }
  ]
}

I used timestamp_to_scn(systimestamp) instead of accessing V$DATABASE or using DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER because the latter two require that privileges be granted. Since TIMESTAMP_TO_SCN is only approximate, I suggest granting the necessary privileges and using one of the exact methods where you can. Notice that the data from the two tables are not correlated: the final query simply cross-joins the two single-row aggregates.

Example: the update API

Now that the user has decided on the changes, the application calls the API with more JSON:
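For example, a payload like this, with the DEPTS rows to change plus the one SCN (the new names and locations here are taken from the HR data we just queried):

```
{
  "Action" : "Update",
  "SCN" : 14080746,
  "DEPTS" : [
    { "DEPTNO" : 10, "DNAME" : "ADMINISTRATION",  "LOC" : "SEATTLE" },
    { "DEPTNO" : 20, "DNAME" : "MARKETING",       "LOC" : "TORONTO" },
    { "DEPTNO" : 30, "DNAME" : "PURCHASING",      "LOC" : "SEATTLE" },
    { "DEPTNO" : 40, "DNAME" : "HUMAN RESOURCES", "LOC" : "LONDON" }
  ]
}
```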


Notice that the only extra data required for optimistic locking is that one SCN.

Now here is the API that converts the JSON to relational data and updates DEPT:

create or replace procedure dept_xapi(p_json_input varchar2) as
    cursor c_input is select 0 SCN, deptno, dname, loc from dept;
    type tt_dept is table of c_input%rowtype;
    lt_dept tt_dept;
begin
    select SCN, deptno, dname, loc bulk collect into lt_dept
    from json_table(p_json_input, '$'
        columns(
            action path '$.Action',
            scn number path '$.SCN',
            nested path '$.DEPTS[*]'
            columns(
                deptno number path '$.DEPTNO',
                dname path '$.DNAME',
                loc path '$.LOC'
            )
        )
    )
    where action = 'Update';
    forall i in 1..lt_dept.count
        update dept set dname = lt_dept(i).dname, loc = lt_dept(i).loc
            where deptno = lt_dept(i).deptno
            and ora_rowscn <= lt_dept(i).scn
            -- referencing DNAME and LOC here makes the statement restart if they
            -- change, so ORA_ROWSCN is checked against committed values
            -- (see my previous ORA_ROWSCN post)
            and coalesce('@',dname||loc) is not null;
    for i in 1..lt_dept.count loop
        if sql%bulk_rowcount(i) = 0 then
            raise_application_error(-20001,
                'DEPT row with DEPTNO = ' || lt_dept(i).deptno ||
                ' already changed by another user. No updates have been made.');
        end if;
    end loop;
end dept_xapi;

…and to call the API:
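A minimal call, reusing the SCN returned by the query API (the values are illustrative; here only DEPT 10 is updated):

```sql
begin
    dept_xapi('{"Action":"Update","SCN":14080746,"DEPTS":[' ||
              '{"DEPTNO":10,"DNAME":"ADMINISTRATION","LOC":"SEATTLE"}]}');
end;
/
commit;
```

If DEPT 10 has changed since that SCN, the call raises ORA-20001 and no update survives.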



The API as I have coded it is atomic: it either does all the updates or does none of them – as long as it is called from a client. If it is called from other PL/SQL code that does explicit COMMITs or ROLLBACKs, or that does not let the exception propagate back to the client, then the atomicity no longer holds.

In order to fit into the “SmartDB” paradigm as promoted by Toon Koppelaars and Bryn Llewellyn, this code would not be called directly from the client. Rather, it would be called by another API that would explicitly COMMIT or ROLLBACK before returning to the client.

Whichever path you choose, the client must receive read-consistent data and provide all the data for the related updates in one call. To avoid lost updates, just add one SCN!


3 thoughts on “Optimistic Locking: One SCN to rule them all”


  2. Sorry for the probably naive question, but instead of an SCN based on ROWDEPENDENCIES, wouldn’t it be at least as simple to use a “user-defined” SCN (e.g. a timestamp) in some old-fashioned way, as an extra column, to achieve the same goal?

    • Hi Chris and thanks for your interest.

      There are two times involved: the time you insert or update the row, and the time you commit.

      The user-defined timestamp refers to the first time. The SCN corresponds to the second time – the moment of the commit.

      To update, the user needs to query the data first, then update it. The query or queries should provide read consistent data, and read consistency is always associated with an SCN, not a user-defined timestamp. Returning the same SCN with the data to be updated allows the API to check for any and all changes to the rows involved, using just that one SCN.

      If you go the timestamp route, the “query” API has to send all the timestamps to the user, and the user has to send them all back to the “update” API. If you try to use just one timestamp, then any changes made by someone else before that time but committed after that time could “get lost”.

      Best regards, Stew
