create or replace package timing_pkg as
start_time pls_integer;
time_taken pls_integer;
procedure set_start_time;
procedure calc_runtime (
operation varchar2
);
end;
/
create or replace package body timing_pkg as
procedure set_start_time as
begin
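-- dbms_utility.get_time returns elapsed time in hundredths of a second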
start_time := dbms_utility.get_time;
end;
procedure calc_runtime (
operation varchar2
) as
begin
time_taken :=
( dbms_utility.get_time - start_time );
dbms_output.put_line ( operation || ' ' || time_taken || ' hundredths of a second' );
end;
end;
/
create table bricks (
brick_id integer
not null
primary key,
colour varchar2(10),
shape varchar2(10),
weight integer
)
;
begin
insert into bricks values ( 1, 'red', 'cylinder', 1 );
insert into bricks values ( 2, 'blue', 'cube', 1 );
insert into bricks values ( 3, 'green', 'cube', 1 );
delete bricks;
commit;
end;
/
truncate table bricks;
create or replace procedure ins_rows ( num_rows int ) as
begin
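-- TRUNCATE is DDL, so inside PL/SQL it must run via EXECUTE IMMEDIATE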
execute immediate 'truncate table bricks';
dbms_random.seed ( 0 );
insert into bricks
with rws as (
select level x from dual
connect by level <= num_rows
)
select rownum,
case mod ( rownum, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end,
case mod ( rownum, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end,
round ( dbms_random.value ( 2, 10 ) )
from rws;
end ins_rows;
/
As with SELECTs, you can get execution plans for inserts, updates, and deletes. The process is the same as for queries: run the statement, then call DBMS_XPLAN as shown. (On Live SQL, the bind variable :LIVESQL_LAST_SQL_ID holds the SQL ID of the statement you last ran.)
insert /*+ gather_plan_statistics */ into bricks
values ( 0, 'red', 'cylinder', 1 );
select *
from table(dbms_xplan.display_cursor(:LIVESQL_LAST_SQL_ID, format => 'IOSTATS LAST'));
insert /*+ gather_plan_statistics */ into bricks
select level, 'red', 'cylinder', 1
from dual
connect by level <= 100;
select *
from table(dbms_xplan.display_cursor(:LIVESQL_LAST_SQL_ID, format => 'IOSTATS LAST'));
update /*+ gather_plan_statistics */ bricks
set shape = 'cube'
where brick_id = 1;
select *
from table(dbms_xplan.display_cursor(:LIVESQL_LAST_SQL_ID, format => 'IOSTATS LAST'));
delete /*+ gather_plan_statistics */ bricks
where brick_id <= 10;
select *
from table(dbms_xplan.display_cursor(:LIVESQL_LAST_SQL_ID, format => 'IOSTATS LAST'));
For INSERT-AS-SELECT, UPDATE, and DELETE, review the plan to see if you can make changes to help the optimizer find the rows faster.
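For example, if the plan for a DELETE with a selective WHERE clause shows a full table scan, adding an index on the filter column may help the database find the rows faster. A sketch (the index is hypothetical):
create index bricks_weight_i on bricks ( weight );
delete /*+ gather_plan_statistics */ bricks
where weight = 2;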
But even if you have an optimal execution plan for a statement, rewriting DML processes can often give order-of-magnitude performance gains.
Unlike queries, other processes can affect writes. Two common culprits are triggers and locking.
Triggers run extra code when processing your insert, update or delete. You can check if there are any triggers on your tables by querying the *_TRIGGERS views:
select * from user_triggers;
If there are, review the code in the triggers. It may be that the write is slow because of the work done in the trigger. If so, see if you can rewrite the process - ideally without using triggers!
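For example, a hypothetical row-level trigger like this runs its body once for every row inserted, so a 10,000-row insert also means 10,000 trigger executions:
create or replace trigger bricks_bi_trg
before insert on bricks
for each row
begin
-- defaulting logic invented for illustration; real triggers often do far more work
:new.weight := nvl ( :new.weight, 1 );
end;
/
-- drop trigger bricks_bi_trg; -- remove it again before running the timing tests below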
Unlike SELECTs, uncommitted changes in other sessions can stop a write from completing. Common causes are other sessions updating the same rows your write changes, or inserting rows with the same primary or unique key values.
It can be hard to avoid this problem completely. For example, if two users need to update the same rows at the same time, the second must wait for the first to end their transaction. To minimize the effect, ensure you commit or roll back transactions as soon as possible. If locking waits continue to be an issue, you may need to redesign how you process data changes.
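A sketch of the problem (run each statement in a different session):
-- session 1: updates a row and holds its row lock until commit/rollback
update bricks
set weight = 2
where brick_id = 1;
-- session 2: this statement blocks until session 1 ends its transaction
update bricks
set weight = 3
where brick_id = 1;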
This code inserts 10,000 rows in the bricks table:
begin
delete bricks;
for i in 1 .. 10000 loop
insert into bricks
values (
i,
case mod ( i, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end,
case mod ( i, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end,
round ( dbms_random.value ( 2, 10 ) )
);
end loop;
end;
/
This means the database has to run the INSERT statement 10,000 times. Although each individual execution is fast, these add up quickly, making the overall process slow. This is particularly problematic if the SQL comes from a mid-tier application, as this makes a round trip to the database for each row.
If you have a commit inside the loop, the process will be even slower: each commit has to wait for the changes to be written to the redo log.
You can make this process faster using bulk processing.
To use this, load an array with data. Then use the FORALL statement. This looks like a FOR LOOP. But unlike a loop, it sends the DML statement inside it to the SQL engine once, binding the whole array:
delete bricks;
select count(*) from bricks;
declare
type bricks_rec is record (
brick_id integer, colour varchar2(10),
shape varchar2(10), weight integer
);
type bricks_array is table of bricks_rec
index by pls_integer;
brick_values bricks_array;
begin
for i in 1 .. 10000 loop
brick_values ( i ) := bricks_rec (
brick_id => i + 20000,
colour => case mod ( i, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end,
shape => case mod ( i, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end,
weight => round ( dbms_random.value ( 2, 10 ) )
);
end loop;
forall rws in 1 .. brick_values.count
insert into bricks
values brick_values ( rws );
end;
/
select count(*) from bricks;
You can also execute bulk or batch DML statements from the middle tier in most programming languages. Check the documentation for your technology to find out how to do this.
This compares methods for inserting 100,000 rows:
Due to the shared nature of Live SQL, there can be a lot of variation in run times for the processes in this tutorial. Run the comparisons 3-4 times to get a feel for the relative performance of the methods shown.
declare
num_rows pls_integer := 100000;
type bricks_rec is record (
brick_id integer, colour varchar2(10),
shape varchar2(10), weight integer
);
type bricks_array is table of bricks_rec
index by pls_integer;
brick_values bricks_array;
begin
delete bricks;
commit;
timing_pkg.set_start_time;
for i in 1 .. num_rows loop
insert into bricks
values (
i,
case mod ( i, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end,
case mod ( i, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end,
round ( dbms_random.value ( 2, 10 ) )
);
end loop;
timing_pkg.calc_runtime ( 'Insert-loop' );
rollback;
timing_pkg.set_start_time;
for i in 1 .. num_rows loop
brick_values ( i ) := bricks_rec (
brick_id => i,
colour => case mod ( i, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end,
shape => case mod ( i, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end,
weight => round ( dbms_random.value ( 2, 10 ) )
);
end loop;
forall rws in 1 .. brick_values.count
insert into bricks
values brick_values ( rws );
timing_pkg.calc_runtime ( 'Insert-forall' );
end;
/
The difference between these methods is large. The FORALL method completes in a few tenths of a second. Single-row inserts in a loop often take 10x longer!
You can also use INSERT to copy rows from one table to another. This code simulates loading 10,000 rows from one table to another using single-row inserts inside a cursor-for loop:
begin
delete bricks;
for rw in (
with rws as (
select level x from dual
connect by level <= 10000
)
select rownum id,
case mod ( rownum, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end colour,
case mod ( rownum, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end shape,
round ( dbms_random.value ( 2, 10 ) ) weight
from rws
) loop
insert into bricks
values (
rw.id,
rw.colour,
rw.shape,
rw.weight
);
end loop;
end;
/
As with the previous example, this means the database has to run the INSERT statement 10,000 times.
When changing data, it's faster to run one statement that changes all the rows. Putting DML inside a loop that changes one row on each iteration is a sure way to slow SQL!
Instead, have one SQL statement that inserts all the rows. This uses INSERT-AS-SELECT to add all 10,000 rows in one go:
delete bricks;
insert into bricks
with rws as (
select level x from dual
connect by level <= 10000
)
select rownum,
case mod ( rownum, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end,
case mod ( rownum, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end,
round ( dbms_random.value ( 2, 10 ) )
from rws;
You can also use bulk processing to copy data. To use this, BULK COLLECT the query's results into an array. Then use a FORALL INSERT:
declare
type bricks_rec is record (
brick_id integer, colour varchar2(10),
shape varchar2(10), weight integer
);
type bricks_array is table of bricks_rec
index by pls_integer;
brick_values bricks_array;
num_rows pls_integer := 10000;
begin
delete bricks;
with rws as (
select level x from dual
connect by level <= num_rows
)
select rownum,
case mod ( rownum, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end,
case mod ( rownum, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end,
round ( dbms_random.value ( 2, 10 ) )
bulk collect
into brick_values
from rws;
forall rws in 1 .. brick_values.count
insert into bricks
values brick_values ( rws );
end;
/
In most cases it's quicker and easier to use INSERT-SELECT. Reserve BULK COLLECT ... FORALL for cases where you need to do procedural (non-SQL) processing of the data before loading it into the target table.
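For example, this sketch generates rows, applies a per-row rule in PL/SQL (the rule is invented for illustration), then loads the results with a FORALL insert:
declare
type bricks_rec is record (
brick_id integer, colour varchar2(10),
shape varchar2(10), weight integer
);
type bricks_array is table of bricks_rec
index by pls_integer;
brick_values bricks_array;
begin
delete bricks;
select rownum, 'red', 'cube', 1
bulk collect into brick_values
from dual
connect by level <= 100;
-- procedural step: per-row logic kept out of the SQL statement
for i in 1 .. brick_values.count loop
if mod ( brick_values ( i ).brick_id, 10 ) = 0 then
brick_values ( i ).weight := 10;
end if;
end loop;
forall i in 1 .. brick_values.count
insert into bricks
values brick_values ( i );
end;
/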
This compares methods for copying 100,000 rows from one table to another:
declare
num_rows pls_integer := 100000;
type bricks_rec is record (
brick_id integer, colour varchar2(10),
shape varchar2(10), weight integer
);
type bricks_array is table of bricks_rec
index by pls_integer;
brick_values bricks_array;
begin
delete bricks;
commit;
timing_pkg.set_start_time;
for rw in (
with rws as (
select level x from dual
connect by level <= num_rows
)
select rownum id,
case mod ( rownum, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end colour,
case mod ( rownum, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end shape,
round ( dbms_random.value ( 2, 10 ) ) weight
from rws
) loop
insert into bricks
values (
rw.id,
rw.colour,
rw.shape,
rw.weight
);
end loop;
timing_pkg.calc_runtime ( 'Insert-loop' );
rollback;
timing_pkg.set_start_time;
insert into bricks
with rws as (
select level x from dual
connect by level <= num_rows
)
select rownum,
case mod ( rownum, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end,
case mod ( rownum, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end,
round ( dbms_random.value ( 2, 10 ) )
from rws;
timing_pkg.calc_runtime ( 'Insert-select' );
rollback;
timing_pkg.set_start_time;
with rws as (
select level x from dual
connect by level <= num_rows
)
select rownum,
case mod ( rownum, 3 )
when 0 then 'red'
when 1 then 'blue'
when 2 then 'green'
end,
case mod ( rownum, 2 )
when 0 then 'cube'
when 1 then 'cylinder'
end,
round ( dbms_random.value ( 2, 10 ) )
bulk collect
into brick_values
from rws;
forall rws in 1 .. brick_values.count
insert into bricks
values brick_values ( rws );
timing_pkg.calc_runtime ( 'Insert-forall' );
end;
/
The single statement can be up to 10x faster than the looped approach; a huge saving! Copying the data using BULK COLLECT then a FORALL insert is also much faster than the loop, though typically marginally slower than plain SQL.
This runs a cursor-for loop to get all the red rows. For each row fetched, it updates its weight to one:
declare
num_rows pls_integer := 10000;
begin
ins_rows ( num_rows );
for rws in (select * from bricks where colour = 'red') loop
update bricks b
set weight = 1
where b.brick_id = rws.brick_id;
end loop;
end;
/
But the loop is unnecessary! Adding a WHERE clause to UPDATE changes all the rows where the conditions are true. Instead of a loop, search for rows where the colour is red:
update bricks
set weight = 1
where colour = 'red' ;
As with INSERT, you can also use FORALL or other batch processing methods to change many rows in one call.
This uses BULK COLLECT to populate an array with BRICK_IDs for the red rows:
declare
num_rows pls_integer := 10000;
rws dbms_sql.number_table;
begin
ins_rows ( num_rows );
select brick_id
bulk collect into rws
from bricks
where colour = 'red';
forall i in 1 .. rws.count
update bricks
set weight = 1
where brick_id = rws (i);
end;
/
Typically you only need bulk updates when the rows to change or their new values are determined by procedural logic in your application, so you can't express the change in a single UPDATE statement.
This compares updating 100,000 rows one at a time in a loop, updating them all with a single statement, and updating them with a FORALL bulk update:
declare
num_rows pls_integer := 100000;
rws dbms_sql.number_table;
begin
ins_rows ( num_rows );
timing_pkg.set_start_time;
for i in 1 .. num_rows loop
update bricks
set colour = 'pink',
shape = 'cone'
where brick_id = i;
end loop;
timing_pkg.calc_runtime ( 'Update-loop' );
ins_rows ( num_rows );
timing_pkg.set_start_time;
update bricks
set colour = 'pink',
shape = 'cone';
timing_pkg.calc_runtime ( 'Update-all' );
ins_rows ( num_rows );
for i in 1 .. num_rows loop
rws (i) := i;
end loop;
timing_pkg.set_start_time;
forall i in 1 .. rws.count
update bricks
set colour = 'pink',
shape = 'cone'
where brick_id = rws (i);
timing_pkg.calc_runtime ( 'Update-forall' );
end;
/
In this example the single update statement is typically 3-4x faster than looping through 100k rows and updating each one. Bulk processing gives similar performance to plain SQL, though it is usually slightly slower.
Finally, DELETE. As with the other DML statements, running a single DELETE removing all the rows is faster than lots of statements removing one at a time.
This removes all the rows with BRICK_IDs between 1 and 1,000 one at a time:
declare
num_rows pls_integer := 10000;
begin
ins_rows ( num_rows );
for rw in 1 .. 1000 loop
delete bricks b
where b.brick_id = rw;
end loop;
end;
/
It's much better to run a single DELETE with a WHERE clause that removes all the desired rows:
exec ins_rows ( 10000 );
delete bricks b
where b.brick_id between 1 and 1000;
As with UPDATE and INSERT, you can use bulk/batch processing to process many rows in one call.
exec ins_rows ( 10000 );
select count(*) from bricks;
declare
rws dbms_sql.number_table;
begin
for i in 1 .. 1000 loop
rws (i) := i;
end loop;
forall i in 1 .. rws.count
delete bricks
where brick_id = rws (i);
end;
/
select count(*) from bricks;
This compares deleting 100,000 rows one at a time in a loop, deleting them all with a single statement, and deleting them with a FORALL bulk delete:
declare
num_rows pls_integer := 100000;
rws dbms_sql.number_table;
begin
ins_rows ( num_rows );
timing_pkg.set_start_time;
for i in 1 .. num_rows loop
delete bricks
where brick_id = i;
end loop;
timing_pkg.calc_runtime ( 'Delete-loop' );
ins_rows ( num_rows );
timing_pkg.set_start_time;
delete bricks;
timing_pkg.calc_runtime ( 'Delete-all' );
ins_rows ( num_rows );
for i in 1 .. num_rows loop
rws (i) := i;
end loop;
timing_pkg.set_start_time;
forall i in 1 .. rws.count
delete bricks
where brick_id = rws (i);
timing_pkg.calc_runtime ( 'Delete-forall' );
end;
/
Again, the single statement is considerably faster - this time ~2-3x faster than deleting all 100k rows one-by-one. The runtime for bulk deletion typically falls somewhere between the other two methods.
This code loops through all the rows with a weight less than six and sets their weight to one. Run it a few times to get a feel for how long it takes to execute:
declare
num_rows pls_integer := 100000;
begin
ins_rows ( num_rows );
timing_pkg.set_start_time;
for rws in (select * from bricks where weight <= 5) loop
update bricks
set weight = 1
where brick_id = rws.brick_id;
end loop;
timing_pkg.calc_runtime ( 'Update-loop' );
end;
/
Replace /* TODO */ in the code below to turn the update above into one statement:
declare
num_rows pls_integer := 100000;
begin
ins_rows ( num_rows );
timing_pkg.set_start_time;
update bricks
set weight = 1
where /* TODO */ ;
timing_pkg.calc_runtime ( 'Update-all' );
end;
/
How does this affect the runtime of the process?
An UPDATE with a where clause is much faster than looping through rows and running an UPDATE for each row. But the single update can still take a long time to finish. And it locks all the affected rows from the time the UPDATE starts until the transaction finishes.
This could take unacceptably long on large tables. If you're changing most or all of the rows in a table, you can make the process faster by doing an "update" in DDL: create a new table with CREATE-TABLE-AS-SELECT, returning the new values in place of the columns you want to change, then swap the new table for the old.
It's rare that you need this trick. Reserve it for one-off migrations or other cases where speed is of the essence.
For example, this creates BRICKS_UPDATE, "updating" the colour and shape of every row to yellow prism by selecting these values instead of the columns in the table:
exec ins_rows ( 100000 );
create table bricks_update (
brick_id primary key,
colour, shape, weight
) as
select brick_id,
cast ( 'yellow' as varchar2(10) ) colour,
cast ( 'prism' as varchar2(10) ) shape,
weight
from bricks;
Note that when selecting literal values, you need to CAST them to ensure the new table has the same data types as the original.
To complete the "update", you need to copy any indexes, constraints, grants, and other dependent objects from the original table to the new one, then swap the tables over.
Adding the dependent objects to the new table - particularly indexes - can take a while. Be sure to measure the total time to complete the process!
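For example, if BRICKS had an index on COLOUR, you would need to recreate it on the new table before the swap (the index is hypothetical):
create index bricks_update_colour_i on bricks_update ( colour );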
This completes the process by dropping the original table, then renaming the new one:
drop table bricks purge;
rename bricks_update to bricks;
This destroys your fallback! If there's an error later in the process, you'll have to restore the data from a backup. It's safer to rename the original table (e.g. to BRICKS_OLD), then rename the new table.
Whichever method you use to swap the tables, there will be a brief period where there is no BRICKS table! You need to take the application(s) offline to complete this process safely.
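With the safer approach, the swap looks like this sketch:
rename bricks to bricks_old;
rename bricks_update to bricks;
-- once you've verified the new table:
-- drop table bricks_old purge;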
This compares the run time of an update changing the shape and colour of every row to doing the "update" using CREATE-TABLE-AS-SELECT and swapping the tables over:
declare
num_rows pls_integer := 100000;
begin
ins_rows ( num_rows );
timing_pkg.set_start_time;
update bricks
set colour = 'yellow',
shape = 'prism';
timing_pkg.calc_runtime ( 'Update-all' );
ins_rows ( num_rows );
timing_pkg.set_start_time;
execute immediate q'!
create table bricks_update (
brick_id primary key,
colour, shape, weight
) as
select brick_id,
cast ( 'yellow' as varchar2(10) ) colour,
cast ( 'prism' as varchar2(10) ) shape,
weight
from bricks!';
execute immediate 'drop table bricks purge';
execute immediate 'rename bricks_update to bricks';
timing_pkg.calc_runtime ( 'Update-ctas' );
end;
/
The speed gains of using a "DDL update" are small in this case. Given the extra complexity and risks, it's only worth considering when changing millions of rows or more. In most cases UPDATE is safer, easier, and fast enough.
You can also make deletion processes faster by changing DML to DDL. The easiest case is when you remove all the rows from a table.
Instead of a DELETE without a WHERE clause, use TRUNCATE. This is a metadata-only operation, so it's effectively "instant":
truncate table bricks;
This gives big performance gains, as this test shows:
declare
num_rows pls_integer := 100000;
begin
ins_rows ( num_rows );
timing_pkg.set_start_time;
delete bricks;
timing_pkg.calc_runtime ( 'Delete-all' );
ins_rows ( num_rows );
timing_pkg.set_start_time;
execute immediate 'truncate table bricks';
timing_pkg.calc_runtime ( 'Truncate' );
end;
/
But TRUNCATE commits, so you can't roll it back. And it has some restrictions that don't apply to DELETE. So this is not always appropriate.
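For example, you can't truncate a table whose primary key is referenced by an enabled foreign key. A sketch with a hypothetical child table:
create table brick_orders (
order_id integer primary key,
brick_id integer references bricks ( brick_id )
);
truncate table bricks;
-- ORA-02266: unique/primary keys in table referenced by enabled foreign keys
-- drop table brick_orders purge; -- remove the child table before continuing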
And it's rare you want to remove all the rows in a table. Even when archiving most of the data, usually you want to keep some rows. There are several other DDL tricks you can use to do this.
As with UPDATE, there are DDL tricks you can use to "delete" data: create-table-as-select and, from Oracle Database 12.2, a filtered table move.
With create-table-as-select, write a query fetching the rows you want to keep. Then wrap this in a create-table statement:
exec ins_rows ( 10000 );
select count (*) from bricks;
create table bricks_keep (
brick_id primary key,
colour, shape, weight
) as
select * from bricks
where brick_id > 9500;
select count (*) from bricks_keep;
From here, you can either switch just the rows over by truncating the original table and re-inserting the rows:
truncate table bricks;
insert into bricks
select * from bricks_keep;
Or you could switch the tables themselves over by dropping or renaming the original table. Then renaming the new table to the old.
As with UPDATE, if you switch the tables over, you also need to copy any index, constraints, etc. from the old table to the new. Ensure you test the runtime of the complete process!
Both of these methods need an outage to complete safely, making them unusable except in extreme cases. Fortunately, Oracle Database 12.2 added an online DDL method to remove lots of data: a filtered table move.
This added the "including rows" clause to ALTER TABLE MOVE. You place a where clause after this to state the rows you want to keep. The database discards the non-matching rows in the process.
exec ins_rows ( 10000 );
select count(*) from bricks;
alter table bricks
move including rows
where brick_id > 9500;
select count(*) from bricks;
This compares the relative performance of these methods and a regular DELETE to remove 90% of the rows from a table:
drop table bricks_keep purge;
declare
num_rows pls_integer := 100000;
begin
ins_rows ( num_rows );
timing_pkg.set_start_time;
delete bricks
where brick_id <= num_rows * 0.9;
timing_pkg.calc_runtime ( 'Delete-where' );
ins_rows ( num_rows );
timing_pkg.set_start_time;
execute immediate '
create table bricks_keep (
brick_id primary key,
colour, shape, weight
) as
select * from bricks
where brick_id > ' || ( num_rows * 0.9 );
execute immediate 'truncate table bricks';
execute immediate 'insert into bricks
select * from bricks_keep';
timing_pkg.calc_runtime ( 'Delete-ctas-swap-rows' );
execute immediate 'drop table bricks_keep purge';
ins_rows ( num_rows );
timing_pkg.set_start_time;
execute immediate '
create table bricks_keep (
brick_id primary key,
colour, shape, weight
) as
select * from bricks
where brick_id > ' || ( num_rows * 0.9 );
execute immediate 'drop table bricks purge';
execute immediate 'rename bricks_keep to bricks';
timing_pkg.calc_runtime ( 'Delete-ctas-swap-table' );
ins_rows ( num_rows );
timing_pkg.set_start_time;
execute immediate '
alter table bricks
move including rows
where brick_id > ' || ( num_rows * 0.9 );
timing_pkg.calc_runtime ( 'Delete-move' );
end;
/
All are notably faster than a regular DELETE. This data set is small, so there's little performance difference between them.
These methods are most effective when removing a large fraction (50%+) of the rows from a table. But they can be slower than a plain DELETE when removing a large absolute number of rows (millions or more) that are only a small fraction of the table. This kind of operation is common when archiving data, such as removing the oldest month of data.
When doing this, partitioning is a great way to remove large numbers of rows fast.
Partitioning a table splits it up into smaller sub-tables. You can do operations that affect all the rows in a partition, leaving the other partitions unaffected. For example, dropping a partition.
This syntax (added in 12.2) changes the table to be range-partitioned on BRICK_ID, with one partition per 10,000 values:
alter table bricks
modify partition by range ( brick_id )
interval ( 10000 ) (
partition p0 values less than ( 10001 ),
partition p1 values less than ( 20001 )
) online;
You can now remove all the rows in a partition by dropping or truncating it. This truncates partition P1:
exec ins_rows ( 30000 );
select count(*) from bricks;
alter table bricks
truncate partition p1;
select count(*) from bricks;
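You can also drop a partition instead; this removes both the rows and the partition itself:
alter table bricks drop partition p0;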
Partitioning a table on insert date is a common strategy. This makes it fast and easy to remove all the rows added before a certain date.
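A sketch of this approach, assuming a hypothetical LOAD_DATE column and monthly partitions:
create table bricks_by_date (
brick_id integer,
load_date date
)
partition by range ( load_date )
interval ( numtoyminterval ( 1, 'month' ) ) (
partition p0 values less than ( date'2024-01-01' )
);
With this in place, removing the oldest month of data is a single partition drop or truncate.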
Remember that partitioning a table impacts all the statements you run against it. Operations that read many partitions may be slower than against the equivalent non-partitioned table. Ensure you test your whole application workload before diving in with partitioning!
This compares removing 20,000 rows with DELETE to truncating the first two partitions:
declare
num_rows pls_integer := 100000;
begin
ins_rows ( num_rows );
timing_pkg.set_start_time;
delete bricks
where brick_id <= 20000;
timing_pkg.calc_runtime ( 'Delete-where' );
timing_pkg.set_start_time;
execute immediate '
alter table bricks
truncate partition p0, p1
';
timing_pkg.calc_runtime ( 'Truncate-partition' );
end;
/
Note: 20,000 rows is tiny in modern database terms. You may notice little or no performance difference between the DELETE and truncating the partitions in the above process.
This process starts with 50,000 rows in BRICKS. It then loops through the data to insert 20,000 new rows one at a time, set the weight of every blue row to two, and delete every green row:
declare
num_rows pls_integer := 50000;
begin
ins_rows ( num_rows );
timing_pkg.set_start_time;
for rws in (
select level id from dual
connect by level <= 20000
) loop
insert into bricks ( brick_id, colour, shape, weight )
values ( rws.id + num_rows, 'red', 'cube', 1 );
end loop;
for rws in (
select brick_id from bricks where colour = 'blue'
) loop
update bricks b
set weight = 2
where b.brick_id = rws.brick_id;
end loop;
for rws in (
select brick_id from bricks where colour = 'green'
) loop
delete bricks b
where b.brick_id = rws.brick_id;
end loop;
timing_pkg.calc_runtime ( 'Looping' );
end;
/
Run it a few times to see how long it takes. Then rewrite the process using the template below to make this faster:
declare
num_rows pls_integer := 50000;
begin
ins_rows ( num_rows );
timing_pkg.set_start_time;
insert into bricks /* TODO */;
update bricks b /* TODO */;
delete bricks b /* TODO */;
timing_pkg.calc_runtime ( 'Optimized' );
end;
/
How fast can you get the process to complete?