Splitting a Table into Rowid Ranges of Equal Size

In the OTN forum, Jonathan Lewis recently asked for an efficient SQL solution to split a table into 12 ROWID ranges having the same (±1) number of blocks. I’m posting here a slightly cleaned-up version of my answer so I can change it if necessary after the question gets archived.

If you stop at the ANSWER clause, you will see the number of blocks in each bucket.

If you stop at the ROWIDS clause, you will get the ROWID ranges which are the point of the exercise.

The final SELECT is for testing purposes: it compares the number of rows in the table to the sum of the rows in each bucket. Obviously the two numbers had better be equal or else there is a bug. If you find a bug, let me know – gently ;)

[Update 2015-08-15: Jonathan was asking on behalf of Bryn Llewellyn, who later contacted me directly. Thanks to Bryn, I have corrected three bugs:

  1. I was using the OBJECT_ID instead of the DATA_OBJECT_ID: I have no idea why…
  2. I was not handling properly cases where the same table had more than one DATA_OBJECT_ID or more than one FILE_ID.
  3. I was dividing before multiplying, which can lead to rounding problems with floating-point arithmetic.]
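
The third bug is worth a concrete illustration. Here is a quick sketch in Python (IEEE 754 doubles; Oracle NUMBER arithmetic is decimal, but the same hazard applies once an inexact quotient is reused in an equality test): dividing first can lose exactness that multiplying first preserves.

```python
# Divide-before-multiply: the intermediate quotient 1/49 is inexact,
# and the rounding error survives the multiplication.
divide_first = (1 / 49) * 49

# Multiply-before-divide: 1 * 49 is exact, and 49 / 49 divides evenly.
multiply_first = (1 * 49) / 49

print(divide_first == 1.0, multiply_first == 1.0)
```

This is why the query computes round((last_bucket * total_blocks) / &d_num_buckets) rather than dividing total_blocks first: both sides of the equality test in SELECTED_EXTENTS must agree exactly.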
define d_owner = 'SYS'  
define d_table_name = 'SOURCE$'
define d_num_buckets = 12

with extents_data as (
  -- All extents of the table (or of its partitions/subpartitions),
  -- joined to ALL_OBJECTS to get the DATA_OBJECT_ID of each segment.
  select o.data_object_id, e.file_id, e.block_id, e.blocks
  from dba_extents e
  join all_objects o
  on (e.owner, e.segment_name, e.segment_type) = ((o.owner, o.object_name, o.object_type))
    and decode(e.partition_name, o.subobject_name, 0, 1) = 0
  where e.segment_type in ('TABLE', 'TABLE PARTITION', 'TABLE SUBPARTITION')
    and e.owner = '&d_owner'
    and e.segment_name = '&d_table_name'
)
, extents_with_sums as (
  -- Running totals of blocks, ordered by segment, file and block.
  select sum(blocks) over() total_blocks,
    sum(blocks) over(order by data_object_id, file_id, block_id) - blocks cumul_prev_blocks,
    sum(blocks) over(order by data_object_id, file_id, block_id) cumul_current_blocks,
    e.*
  from extents_data e
)
, extents_with_buckets as (
  -- Which bucket each extent starts in and ends in.
  select width_bucket(cumul_prev_blocks, 1, total_blocks + 1, &d_num_buckets) prev_bucket,
    width_bucket(cumul_prev_blocks + 1, 1, total_blocks + 1, &d_num_buckets) first_bucket,
    width_bucket(cumul_current_blocks, 1, total_blocks + 1, &d_num_buckets) last_bucket,
    e.*
  from extents_with_sums e
)
, selected_extents as (
  -- Keep only the extents that end a bucket or span a bucket boundary.
  select *
  from extents_with_buckets
  where cumul_current_blocks = round((last_bucket * total_blocks) / &d_num_buckets)
    or prev_bucket < last_bucket
)
, expanded_extents as (
  -- One row per (extent, bucket) pair, with the start and end offsets
  -- of the bucket expressed in cumulative blocks.
  select first_bucket + level - 1 bucket,
    case level when 1 then cumul_prev_blocks
      else round(((first_bucket + level - 2) * total_blocks) / &d_num_buckets)
    end start_blocks,
    case first_bucket + level - 1 when last_bucket then cumul_current_blocks - 1
      else round(((first_bucket + level - 1) * total_blocks) / &d_num_buckets) - 1
    end end_blocks,
    e.*
  from selected_extents e
  connect by cumul_prev_blocks = prior cumul_prev_blocks
    and first_bucket + level - 1 <= last_bucket
    and prior sys_guid() is not null
)
, answer as (
  -- First and last (data_object_id, file_id, block_id) of each bucket,
  -- plus the number of blocks in the bucket.
  select bucket,
    min(data_object_id)
      keep (dense_rank first order by cumul_prev_blocks) first_data_object_id,
    min(file_id)
      keep (dense_rank first order by cumul_prev_blocks) first_file_id,
    min(block_id + start_blocks - cumul_prev_blocks)
      keep (dense_rank first order by cumul_prev_blocks) first_block_id,
    max(data_object_id)
      keep (dense_rank last order by cumul_prev_blocks) last_data_object_id,
    max(file_id)
      keep (dense_rank last order by cumul_prev_blocks) last_file_id,
    max(block_id + end_blocks - cumul_prev_blocks)
      keep (dense_rank last order by cumul_prev_blocks) last_block_id,
    max(end_blocks) + 1 - min(start_blocks) blocks
  from expanded_extents
  group by bucket
)
, rowids as (
  -- Convert the block boundaries of each bucket to a ROWID range.
  select bucket,
    dbms_rowid.rowid_create(
      1, first_data_object_id, first_file_id, first_block_id, 0
    ) rowid_start,
    dbms_rowid.rowid_create(
      1, last_data_object_id, last_file_id, last_block_id, 32767
    ) rowid_end
  from answer
  order by bucket
)
-- Generate SQL that compares COUNT(*) over the whole table
-- to the sum of the counts within each ROWID range.
select 'select count(*) cnt from &d_owner..&d_table_name union all select sum(cnt) from (' txt from dual
union all
select 'select count(*) cnt from &d_owner..&d_table_name where rowid between chartorowid('''
|| rowid_start || ''') and chartorowid(''' || rowid_end || ''')'
|| case when lead(rowid_start) over(order by rowid_start) is null then ');'
  else ' union all'
end test_sql
from rowids;
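
To make the boundary arithmetic concrete: bucket k ends at cumulative block round(k * total_blocks / num_buckets), so bucket sizes can differ by at most one block. Here is a minimal Python sketch of just that rule, with made-up extent sizes (the real query computes this in SQL from DBA_EXTENTS):

```python
# Sketch of the bucketing rule: bucket k covers cumulative blocks
# round((k-1) * total / n) through round(k * total / n) - 1.
# (Python's round() uses banker's rounding while Oracle's ROUND rounds
# halves away from zero; no boundary in this example lands on exactly .5.)

def bucket_boundaries(total_blocks, num_buckets):
    """Return (start, end) cumulative-block ranges, one per bucket."""
    bounds = [round(k * total_blocks / num_buckets) for k in range(num_buckets + 1)]
    return [(bounds[k], bounds[k + 1] - 1) for k in range(num_buckets)]

extents = [8, 8, 8, 128, 128, 1024]   # hypothetical extent sizes in blocks
total = sum(extents)                  # 1304 blocks in all
buckets = bucket_boundaries(total, 12)
sizes = [end - start + 1 for start, end in buckets]
print(sizes)                          # twelve sizes, each 108 or 109
```

The query then walks the extent list and, for each bucket, maps its start and end offsets back to a (data_object_id, file_id, block_id) triple, which DBMS_ROWID.ROWID_CREATE turns into the ROWID range.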

7 thoughts on “Splitting a Table into Rowid Ranges of Equal Size”

  1. Great job Stew.
    My question is, how can we use that? What is the reason to make such buckets?
    DBMS_PARALLEL_EXECUTE fair distribution, or what?

    • Hello GG,

      Anytime you use DBMS_PARALLEL_EXECUTE, you have to break the work up into chunks using either index-based ranges or ROWID ranges. ROWID ranges allow you to do a “partial table scan” which should be more efficient than index-based access.

      There is one reason for DBMS_PARALLEL_EXECUTE that is not mentioned often: remote access to data. If you use native SQL parallelism when accessing a remote database, there will still be only one DBLINK open and it will be a bottleneck. With DBMS_PARALLEL_EXECUTE, we can have multiple DBLINKs and potentially go much faster.

      Now the question is, why split the data up into equal size buckets? DBMS_PARALLEL_EXECUTE will create chunks for us, but they will never be bigger than one extent and they will not always be the same size. I frankly don’t know if creating 12, 20 or 100 equal size buckets is better or not. I got interested in the subject because Jonathan Lewis and Bryn Llewellyn asked for volunteers.

      Best regards, Stew

  2. This is excellent.
    I updated your query to handle multiple tables and chunks at once.
    There was also a problem with transportable tablespaces, since they use relative_fno.
    But that doesn't matter; this idea is great!

  3. Thank you Stew.
    Here is my question: is it possible to write a SQL query to split a table into 12 ROWID ranges having the same number of rows? I'm looking to use this solution to extract data from a huge table into a number of flat files. I would then use those files to load the data into a different system using its MPP architecture.

    One thing I noticed is that the above solution splits the table into an equal number of blocks. That, however, does not guarantee that the buckets will contain the same number of rows. In my application the tables are heavily updated, so each bucket ends up with a different number of rows.


    • Sure.

      select bucket,
        min(rowid) from_rowid,
        max(rowid) to_rowid,
        count(*) cnt
      from (
        select rowid,
          ntile(12) over(order by rowid) bucket
        from <table>
      )
      group by bucket
      order by bucket;

      -- then, for each bucket, substitute its from_rowid and to_rowid:
      select * from <table>
      where rowid between chartorowid(from_rowid) and chartorowid(to_rowid);
      • This query gave me a better way to split a table into files of equal size. Thanks Stew.
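
As an aside for readers unfamiliar with NTILE: it deals rows out into groups whose sizes differ by at most one, with the larger groups coming first. A small Python model of that rule, purely for illustration:

```python
def ntile(n, rows):
    """Model SQL's NTILE(n): split ordered rows into n groups whose
    sizes differ by at most 1, larger groups first."""
    q, r = divmod(len(rows), n)
    groups, i = [], 0
    for k in range(n):
        size = q + (1 if k < r else 0)   # first r groups get one extra row
        groups.append(rows[i:i + size])
        i += size
    return groups

groups = ntile(12, list(range(100)))     # 100 rows into 12 groups
print([len(g) for g in groups])          # four groups of 9, then eight of 8
```

Unlike the block-based split in the article, this is row-exact, but it requires a full scan and sort of the ROWIDs up front.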
