Splitting a Table into Rowid Ranges of Equal Size

In the OTN forum, Jonathan Lewis recently asked for an efficient SQL solution to split a table into 12 ROWID ranges having the same (±1) number of blocks. I’m posting here a slightly cleaned-up version of my answer so I can change it if necessary after the question gets archived.

If you stop at the ANSWER clause, you will see the number of blocks in each bucket.

If you stop at the ROWIDS clause, you will get the ROWID ranges which are the point of the exercise.

The final SELECT is for testing purposes: it compares the number of rows in the table to the sum of the rows in each bucket. Obviously the two numbers had better be equal or else there is a bug. If you find a bug, let me know – gently ;)

[Update 2015-08-15: Jonathan was asking on behalf of Bryn Llewellyn, who later contacted me directly. Thanks to Bryn, I have corrected three bugs:

  1. I was using the OBJECT_ID instead of the DATA_OBJECT_ID: I have no idea why…
  2. I was not handling properly cases where the same table had more than one DATA_OBJECT_ID or more than one FILE_ID.
  3. I was dividing before multiplying, which can lead to rounding problems with floating-point arithmetic.]
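
To see why the third bug matters: Oracle computes division results to a finite number of digits, so dividing before multiplying can land just below a rounding boundary. A quick sketch (the numbers here are illustrative, not from the original problem):

select round(9 * 22 / 12) mult_first,   -- round(16.5) = 17
  round(9 * (22 / 12)) div_first        -- round(9 * 1.8333...) = round(16.49999...) = 16
from dual;
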
define d_owner = 'SYS'  
define d_table_name = 'SOURCE$'
define d_num_buckets = 12

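-- extents_data: one row per extent of the target table (including
-- partitions and subpartitions), with the data_object_id needed
-- later to build ROWIDs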
with extents_data as (
  select o.data_object_id, e.file_id, e.block_id, e.blocks
  from dba_extents e
  join all_objects o
  on (e.owner, e.segment_name, e.segment_type) = ((o.owner, o.object_name, o.object_type))
    and decode(e.partition_name, o.subobject_name, 0, 1) = 0
  where e.segment_type in ('TABLE', 'TABLE PARTITION', 'TABLE SUBPARTITION')
    and e.owner = '&d_owner'
    and e.segment_name = '&d_table_name'
)
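-- extents_with_sums: add the grand total of blocks, plus running totals
-- of blocks before each extent (cumul_prev_blocks) and through each
-- extent (cumul_current_blocks)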
, extents_with_sums as (   
  select sum(blocks) over() total_blocks,
    sum(blocks) over(order by data_object_id, file_id, block_id) - blocks cumul_prev_blocks,   
    sum(blocks) over(order by data_object_id, file_id, block_id) cumul_current_blocks,
    e.*
  from extents_data e 
)
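-- extents_with_buckets: use width_bucket to find which buckets the
-- extent's first and last blocks fall into; an extent may span several buckets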
, extents_with_buckets as (   
  select width_bucket(cumul_prev_blocks, 1, total_blocks + 1, &d_num_buckets) prev_bucket,
    width_bucket(cumul_prev_blocks+1, 1, total_blocks + 1, &d_num_buckets) first_bucket,
    width_bucket(cumul_current_blocks, 1, total_blocks + 1, &d_num_buckets) last_bucket,
    e.*
  from extents_with_sums e   
)
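-- selected_extents: keep only the extents that end a bucket exactly
-- or that contain a bucket boundary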
, selected_extents as (
  select *
  from extents_with_buckets
  where cumul_current_blocks = round((last_bucket * total_blocks) / &d_num_buckets)
    or prev_bucket < last_bucket
)
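-- expanded_extents: expand each selected extent into one row per bucket
-- it touches (connect by); start_blocks and end_blocks are table-wide
-- cumulative block offsets bounding each bucket's slice of the extent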
, expanded_extents as (   
  select first_bucket + level - 1 bucket,
    case level when 1 then cumul_prev_blocks
      else round(((first_bucket + level - 2) * total_blocks) / &d_num_buckets) 
    end start_blocks,   
    case first_bucket + level - 1 when last_bucket then cumul_current_blocks - 1
      else round(((first_bucket + level - 1) * total_blocks) / &d_num_buckets) - 1
    end end_blocks,
    e.*
  from selected_extents e
  connect by cumul_prev_blocks = prior cumul_prev_blocks   
    and first_bucket + level -1 <= last_bucket   
    and prior sys_guid() is not null
)
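-- answer: one row per bucket, with the data_object_id, file_id and block_id
-- of its first and last blocks, and the number of blocks it contains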
, answer as ( 
  select bucket,
    min(data_object_id)
      keep (dense_rank first order by cumul_prev_blocks) first_data_object_id,
    min(file_id)
      keep (dense_rank first order by cumul_prev_blocks) first_file_id,  
    min(block_id + start_blocks - cumul_prev_blocks)   
      keep (dense_rank first order by cumul_prev_blocks) first_block_id,
    max(data_object_id)
      keep (dense_rank last order by cumul_prev_blocks) last_data_object_id,
    max(file_id)
      keep (dense_rank last order by cumul_prev_blocks) last_file_id,  
    max(block_id + end_blocks - cumul_prev_blocks)   
      keep (dense_rank last order by cumul_prev_blocks) last_block_id,  
    max(end_blocks) + 1 - min(start_blocks) blocks   
  from expanded_extents   
  group by bucket 
)
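-- rowids: convert the block boundaries into ROWID ranges;
-- row number 32767 makes the end ROWID sort after any real row in the block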
, rowids as (
  select
  dbms_rowid.rowid_create(
    1, first_data_object_id, first_file_id, first_block_id, 0
  ) rowid_start,   
  dbms_rowid.rowid_create(
    1, last_data_object_id, last_file_id, last_block_id, 32767
  ) rowid_end   
  from answer
  order by bucket
)
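-- generate a test script that compares the table's total row count
-- with the sum of the row counts of the ROWID ranges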
select
'select count(*) cnt from &d_owner..&d_table_name union all select sum(cnt) from (' txt from dual
union all
select 'select count(*) cnt from &d_owner..&d_table_name where rowid between chartorowid('''
|| rowid_start || ''') and chartorowid(''' || rowid_end || ''')'
|| case when lead(rowid_start) over(order by rowid_start) is null then ');'
  else ' union all'
end test_sql
from rowids;

17 thoughts on “Splitting a Table into Rowid Ranges of Equal Size”

  1. Great job Stew.
    My question is, how can we use that? What is the reason to make such buckets?
    dbms_parallel fair distribution, or what?
    Regards
    GG

    • Hello GG,

      Anytime you use DBMS_PARALLEL_EXECUTE, you have to break the work up into chunks using either index-based ranges or ROWID ranges. ROWID ranges allow you to do a “partial table scan” which should be more efficient than index-based access.

      There is one reason for DBMS_PARALLEL_EXECUTE that is not mentioned often: remote access to data. If you use native SQL parallelism when accessing a remote database, there will still be only one DBLINK open and it will be a bottleneck. With DBMS_PARALLEL_EXECUTE, we can have multiple DBLINKs and potentially go much faster.

      Now the question is, why split the data up into equal size buckets? DBMS_PARALLEL_EXECUTE will create chunks for us, but they will never be bigger than one extent and they will not always be the same size. I frankly don’t know if creating 12, 20 or 100 equal size buckets is better or not. I got interested in the subject because Jonathan Lewis and Bryn Llewellyn asked for volunteers.
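
      For reference, here is a minimal sketch of DBMS_PARALLEL_EXECUTE chunking by ROWID (the task, owner, table and column names are hypothetical):

      begin
        dbms_parallel_execute.create_task('DEMO_TASK');
        -- by_row => false means chunk_size is a number of blocks, not rows
        dbms_parallel_execute.create_chunks_by_rowid(
          task_name   => 'DEMO_TASK',
          table_owner => 'SCOTT',
          table_name  => 'BIG_TABLE',
          by_row      => false,
          chunk_size  => 10000
        );
        -- each chunk is processed with its own :start_id/:end_id ROWID binds
        dbms_parallel_execute.run_task(
          task_name      => 'DEMO_TASK',
          sql_stmt       => 'update scott.big_table set processed = ''Y''
                             where rowid between :start_id and :end_id',
          language_flag  => dbms_sql.native,
          parallel_level => 4
        );
        dbms_parallel_execute.drop_task('DEMO_TASK');
      end;
      /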

      Best regards, Stew

  2. This is excellent.
    I updated your query to serve multiple tables and chunks at once.
    There was also a problem with transportable tablespaces, since they use relative_fno.
    But that does not matter – this idea is great!

  3. Thank you Stew.
    Here is my question: is it possible to write a SQL query to split a table into 12 ROWID ranges having the same number of rows? I’m looking to use this solution to extract data from a huge table into a number of flat files. I would then use these files to load the data into a different system using its MPP architecture.

    One thing I noticed is that the above solution splits the table into equal numbers of blocks. That, however, does not guarantee that the buckets contain the same number of rows. In my application the tables get updated heavily; as a result, each bucket ends up with a different number of rows.

    Regards,
    Sreenivas

    • Sure.

      -- NTILE deals the rows out into 12 buckets with equal (±1) row counts
      select bucket,
        min(rid) from_rowid,
        max(rid) to_rowid,
        count(*) cnt
      from (
        select rowid rid,
          ntile(12) over(order by rowid) bucket
        from <table>
      )
      group by bucket
      order by bucket;

      -- then, for each bucket, bind or substitute the range:
      select * from <table>
      where rowid between chartorowid(:from_rowid) and chartorowid(:to_rowid);

      • This query gave me a better way to split the table into files of equal size. Thanks Stew.

    • Hello Kannan,

      I improved the code and my explanation in a more recent series of blog posts. See for example

      https://stewashton.wordpress.com/2016/02/08/chunking-tables-6-join-solution/

      which presents a complete solution and gives you a list of all the posts.

      The solution as written does not work for IOTs:
      – It looks for segment_type like ‘TABLE%’, whereas an IOT resides in a segment of type ‘INDEX’
      – It looks for a segment name equal to the table name, whereas an IOT has a segment name generated by the system
      – It uses normal ROWIDs, whereas IOTs have a different structure.

      I suppose an IOT version could be written, but I am not 100% certain.

      Best regards, Stew

  4. The script works great, thank you very much!
    We needed to migrate a non-partitioned table 1.5 TB in size, and dbms_parallel_execute created many more chunks than we wanted. This script saved us: it returns the number of chunks we need.

  5. Hi, thanks for your post. I’m attempting to find a faster way to compute splits across a dblink than using ntile, because our tables are quite large and running the ntile takes a very long time. Your version runs well enough; however, the rowids returned for the table do not seem valid:

    select count(*) cnt from FOO.BAR union all select sum(cnt) from (
    select count(*) cnt from FOO.BAR where rowid between chartorowid('AABTipAAGACYGKgAAA') and chartorowid('AABTipAAGACrbUPH//') union all
    select count(*) cnt from FOO.BAR where rowid between chartorowid('AABTipAAGACrbUQAAA') and chartorowid('AABTipAAGACsClfH//') union all
    select count(*) cnt from FOO.BAR where rowid between chartorowid('AABTipAAGACsClgAAA') and chartorowid('AABTipAAGACsh5PH//') union all
    select count(*) cnt from FOO.BAR where rowid between chartorowid('AABTipAAGACsh5QAAA') and chartorowid('AABTipAAGACtDafH//'));

    CNT
    ———-
    48945014
    0

    2 rows selected.

    Any suggestions to troubleshoot why the rowids returned by the function do not seem to match the table? Thanks!

      • Here is an example of what I’m seeing:

        -- min/max rowid from the table
        select min(rowid), max(rowid) from foo_bar;
        AAQtdoAAMAANU4kAAA AAQtdoACkAAOJZqADB

        -- pull the first few example rowids from the table
        select rowid as row_id from foo_bar order by rowid;
        AAQtdoAAMAANU4kAAA
        AAQtdoAAMAANU4kAAB
        AAQtdoAAMAANU4kAAC
        AAQtdoAAMAANU4kAAD
        AAQtdoAAMAANU4kAAE
        AAQtdoAAMAANU4kAAF
        AAQtdoAAMAANU4kAAG
        AAQtdoAAMAANU4kAAH
        AAQtdoAAMAANU4kAAI

        -- using ntile for splits: these ranges are valid for the table
        select grp, min(rid), max(rid) from
        (
        select rowid as rid, ntile(4) over (order by rowid) grp from foo_bar
        )
        group by grp
        order by grp;

        1 AAQtdoAAMAANU4kAAA AAQtdoAAMAAN6ukART
        2 AAQtdoAAMAAN6ukARU AAQtdoACkAAKIHfAf9
        3 AAQtdoACkAAKIHfAf+ AAQtdoACkAAMrCkAKs
        4 AAQtdoACkAAMrCkAKt AAQtdoACkAAOJZqADB

        -- splits using the blocks-method SQL from https://stewashton.wordpress.com/2016/02/08/chunking-tables-6-join-solution/
        -- these rowid values are not valid for the table
        1 AAQtdoAAGADNU4gAAA AAQtdoAAGADN6ofH//
        2 AAQtdoAAGADN6sgAAA AAQtdoAAGApKIEfH//
        4 AAQtdoAAGApMrCgAAA AAQtdoAAGApOJZ/H//
        3 AAQtdoAAGApKIHAAAA AAQtdoAAGApMq8/H//

        Thanks in advance for your help!

      • Michael,

        It’s hard to debug without being able to reproduce, but I’ll try to help.

        First of all, could you confirm that these are ordinary heap tables – that is, not clustered and not index organized? Also, are they partitioned?

        What is your database character set?

        I’ll look at it some more and reply if I think of something.

        Regards, Stew

  6. Hi Stew,

    Just curious: I have created the rowid boundaries using your method, but when I try to get the size (the sum over each rowid range) for a table with LOBs, it does not include the LOB size. Is there a way I can add the LOB size to those rowid ranges? Or is there a way to create rowid ranges for LOB segments?

    Regards
    Kannan

    • Hi Kannan,

      Actual LOB values are stored either in the table row (inline) or outside of the table row (out-of-line). Inline LOBs must be smaller than 4000 bytes. Out-of-line LOBs are stored in their own segments and they do not have rowids.
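
      If it helps, here is a quick way to see how a table's LOB columns are stored (the table name is hypothetical):

      -- IN_ROW = 'YES' means values under the inline limit are stored in the row
      select column_name, segment_name, in_row
      from user_lobs
      where table_name = 'MY_LOB_TABLE';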

      Best regards, Stew
