LOG ERRORS: Suppress row-level errors in DML

Use LOG ERRORS to suppress row-level errors from within the SQL engine when executing non-query DML statements (inserts, updates, deletes, merges). Instead of raising an error, the SQL engine inserts a row into your error log table - which you then must (really must!) check after statement execution to see if there were any problems.

Or you will be hiding under the pillows, and your users will really not appreciate that.

In contrast, if you use SAVE EXCEPTIONS with FORALL, you suppress statement-level errors: when one of the statements generated by the FORALL fails, all changes made to rows by that statement are rolled back, but the FORALL continues with the remaining statements.
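
Here is a minimal sketch of that SAVE EXCEPTIONS pattern, using the plch_employees table created below (the times-ten update and the list of IDs are invented for illustration):

DECLARE
   TYPE ids_t IS TABLE OF plch_employees.employee_id%TYPE;

   l_ids         ids_t := ids_t (100, 200, 300);

   bulk_errors   EXCEPTION;
   PRAGMA EXCEPTION_INIT (bulk_errors, -24381);
BEGIN
   FORALL indx IN 1 .. l_ids.COUNT SAVE EXCEPTIONS
      UPDATE plch_employees
         SET salary = salary * 10
       WHERE employee_id = l_ids (indx);
EXCEPTION
   WHEN bulk_errors
   THEN
      -- One entry per failed statement; error_code is the positive Oracle code
      FOR indx IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
      LOOP
         DBMS_OUTPUT.put_line (
               'ORA-'
            || SQL%BULK_EXCEPTIONS (indx).error_code
            || ' on statement #'
            || SQL%BULK_EXCEPTIONS (indx).error_index);
      END LOOP;
END;
/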

Suppose I execute the following statements:
CREATE TABLE plch_employees
(
   employee_id   INTEGER PRIMARY KEY,
   last_name     VARCHAR2 (100),
   salary        NUMBER (3)
)
/

BEGIN
   INSERT INTO plch_employees
        VALUES (100, 'Sumac', 100);

   INSERT INTO plch_employees
        VALUES (200, 'Birch', 50);

   INSERT INTO plch_employees
        VALUES (300, 'Alder', 200);

   COMMIT;
END;
/

BEGIN
   DBMS_ERRLOG.create_error_log (dml_table_name => 'PLCH_EMPLOYEES');
END;
/
I then write the following incomplete block:
DECLARE
   TYPE two_vals_rt IS RECORD
   (
      lowval   NUMBER,
      hival    NUMBER
   );

   TYPE ids_t IS TABLE OF two_vals_rt;

   l_ids     ids_t := ids_t (NULL, NULL);

   l_total   NUMBER;

   PROCEDURE show_sum IS
   BEGIN
      SELECT SUM (salary) INTO l_total FROM plch_employees;
      DBMS_OUTPUT.put_line (l_total);
   END;
BEGIN
   l_ids (1).lowval := 290;
   l_ids (1).hival := 500;
   l_ids (2).lowval := 75;
   l_ids (2).hival := 275;

##REPLACE##

   show_sum;
EXCEPTION 
   WHEN OTHERS 
   THEN 
       show_sum;
END; 
/
Which of the choices provide a replacement for ##REPLACE## so that after the resulting block executes, "800" is displayed?

When you execute a non-SELECT DML statement against a table and an error occurs, the statement is terminated and rolled back in its entirety. This can be wasteful of time and system resources. You can avoid this situation by using the DML error logging feature.


To use DML error logging, you add a LOG ERRORS clause to your DML statement, specifying the name of an error logging table into which the database records errors encountered during the operation. With this clause in place, certain types of errors no longer terminate and roll back the statement. Instead, each error is logged and the statement continues. Afterwards, you can query the contents of the table to either move the error information into your application log or take corrective action on the rows that caused problems.

The error logging clause also lets you specify an upper limit on the number of errors that can be logged for that statement. If this limit is exceeded, the statement terminates with an error and all of its changes are rolled back. Alternatively, you can specify REJECT LIMIT UNLIMITED to allow any number of errors to be logged.
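
For illustration (the times-ten raise and the 'raise-all' tag are invented for this sketch), a statement using the clause against the quiz table could look like this:

UPDATE plch_employees
   SET salary = salary * 10
   LOG ERRORS INTO err$_plch_employees ('raise-all')
   REJECT LIMIT UNLIMITED;

Any row whose new salary no longer fits in the NUMBER(3) column raises ORA-01438; with LOG ERRORS, that row is written to the log table and the update continues with the remaining rows.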

If all the errors raised by the statement can be logged without exceeding the reject limit, the statement completes successfully, and the changes that did succeed are not rolled back.

The error logging is always performed as an autonomous transaction, so the logged errors are not rolled back when the DML statement fails or is rolled back; they remain available for checking and error correction.
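
Because the log rows survive the failed statement, you can inspect them immediately afterwards. DBMS_ERRLOG.create_error_log names the log table ERR$_ followed by the base table name by default, so for the quiz table you could run:

SELECT ora_err_number$, ora_err_tag$, ora_err_mesg$, employee_id
  FROM err$_plch_employees;

The ORA_ERR_% columns are added by DBMS_ERRLOG; the base table's columns appear alongside them (as VARCHAR2 values), so you can see exactly which rows failed and why.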

When is it a good idea to implement error log tables?

Certainly in the development and test phases of a project, they are an excellent mechanism for catching issues that were not captured as part of the application requirements.

In addition, if you are performing "bulk" changes to a table, where a single DML statement modifies a large number of rows, you may not want a failure on one row to roll back the changes made to all the other rows by that same statement. In this case, LOG ERRORS is the only way to go.

Are error log tables needed in production?

Can you safely say that all accepted requirements were correctly implemented, that the requirements are complete and that nothing was left out? If the answer is Yes, then you don't need those tables (nothing can go wrong). If the answer is No, they may help you debug issues as they pop up.

You can implement error log tables (or equivalents thereof) as part of instrumentation that documents execution of application logic, and in many cases that is probably the best way to go. But for tables that are subject to nothing but simple DML, yet have constraints and therefore the ability to cause errors, error log tables may be the easiest and "cheapest" way of documenting when something didn't work out the way we thought it would.
