
Implementing Enhanced Table Auditing in the Oracle Dev Gym

I'd like to start off this post by thanking Connor McDonald, a member of my Oracle Developer Advocates team.

I'm a traditional sort of Oracle developer. I learned a long time ago to use row-level triggers to keep track of who created and changed a row and when. 

So, for example, for the DG_CLASSES table of the Oracle Dev Gym (which is used both for Dev Gym classes and AskTOM Office Hours), I have these two triggers:

CREATE OR REPLACE TRIGGER dg_classes_bir
   BEFORE INSERT ON dg_classes
   FOR EACH ROW
BEGIN
   :new.created_on := SYSDATE;
   :new.created_by := qdb_config.apex_user;
   :new.changed_on := SYSDATE;
   :new.changed_by := qdb_config.apex_user;
END dg_classes_bir;

CREATE OR REPLACE TRIGGER dg_classes_bur
   BEFORE UPDATE ON dg_classes
   FOR EACH ROW
BEGIN
   :new.changed_on := SYSDATE;
   :new.changed_by := qdb_config.apex_user;
END dg_classes_bur;

Clearly, this kind of auditing has some drawbacks, including:

  • I have no idea what was changed, just that something was changed.
  • Those triggers fire all the time, so if I want to, say, do a batch load of rows and preserve the existing created/changed column values, I have to disable the triggers, do the load, and then re-enable the triggers.
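To make that second drawback concrete, here is the dance a batch load forces on me with trigger-based auditing (just a sketch; the load step itself is elided):

```sql
-- The disable/load/enable dance forced by the old-style triggers
ALTER TRIGGER dg_classes_bir DISABLE;
ALTER TRIGGER dg_classes_bur DISABLE;

-- ... batch load into dg_classes, preserving created/changed columns ...

ALTER TRIGGER dg_classes_bir ENABLE;
ALTER TRIGGER dg_classes_bur ENABLE;
```

Forget one step (or hit an error mid-load) and you either lose your historical values or stop auditing entirely.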
Connor recently published a blog post and utility for enhanced table auditing. I strongly suggest you read that before continuing. You will understand more thoroughly the drawbacks of my approach. And you will understand why I decided to use his utility in the Oracle Dev Gym. 

I'd like to share how I went about it, including some pages I added to the Dev Gym to give us easy access to the auditing data. Perhaps you'd want to do something similar.

The installation of Connor's code went smoothly (not that I expected anything differently). Then it was time to generate all the code needed for auditing on a select number of key tables. Again, Connor made it really easy by offering a single procedure, audit_util.generate_audit_support, that creates all the code for a specific table.

I built a layer of code over that to handle multiple tables and disable those pesky, out-of-date triggers:

DECLARE
   s   VARCHAR2 (100) := 'QDB_PROD';
   a   VARCHAR2 (100) := 'EXECUTE';

   TYPE tables_t IS TABLE OF VARCHAR2 (1000);

   l_tables   tables_t
      := tables_t ('DG_CLASSES' /* ... and the other tables to audit ... */);

   PROCEDURE generate_for_tables (
      schema_in   IN VARCHAR2,
      tables_in   IN tables_t,
      action_in   IN VARCHAR2)
   IS
   BEGIN
      FOR indx IN 1 .. tables_in.COUNT
      LOOP
         /* Creates audit table, package, triggers */
         audit_util.generate_audit_support (
            p_owner        => schema_in,
            p_table_name   => tables_in (indx),
            p_action       => action_in);

         /* Disable current BIR and BUR triggers. Can delete later. */
         EXECUTE IMMEDIATE
            'ALTER TRIGGER ' || tables_in (indx) || '_BIR DISABLE';
         EXECUTE IMMEDIATE
            'ALTER TRIGGER ' || tables_in (indx) || '_BUR DISABLE';
      END LOOP;
   END generate_for_tables;
BEGIN
   /* Enable auditing in the Dev Gym schema */
   INSERT INTO schema_list VALUES ('QDB_PROD');

   generate_for_tables (s, l_tables, a);
END;
After I ran this block, I had 7 new tables, 7 new packages and 7 new triggers in my schema. I didn't write any of that code and will likely never look at it or change it. 
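If you want to confirm what the generator produced, a quick data dictionary query along these lines lists the generated objects (this assumes, as in my schema, that the generated objects are named with the AUD$ prefix):

```sql
SELECT object_type, object_name
  FROM user_objects
 WHERE object_name LIKE 'AUD$%'
 ORDER BY object_type, object_name;
```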

Now, whenever a row is inserted, updated or deleted in one of those tables, a row is inserted into the "generic" AUDIT_HEADER table and another row is inserted into the appropriate AUD$dg_table_name audit table. And it works perfectly!
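For example, to see the audited history of classes alongside the generic header rows, a join like this should do the trick (the AUD$ID join column is the same one the report query later in this post relies on; I use c.* rather than guess at the DG_CLASSES column list):

```sql
SELECT h.aud$tstamp, h.table_name, c.*
  FROM audit_header h
       JOIN aud$dg_classes c ON c.aud$id = h.aud$id
 WHERE h.table_name = 'DG_CLASSES'
 ORDER BY h.aud$tstamp DESC;
```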

But the question then confronting me was: how do I view and work with this data? Clearly, the answer is to build some pages in the admin interface of the Dev Gym. 

The first and simplest step was to build an interactive report on the AUDIT_HEADER table. Here's my first pass, 100% default behavior:

Nice! But wait... what does this tell me, and what does it not tell me? I can now see when changes were made to the audited tables, but this "generic" table doesn't contain any of the really interesting data, such as which row in the audited table was affected, or the values in that row before the change.

I'd really like to:
  • see the primary key value of the row that was affected. This will make it easier, for example, to filter the above report to show all changes to, say, user 4750.
  • view all the data for that row in the specific audit table
  • edit the affected row using an existing edit page in the Dev Gym
I'm going to show you (the interesting parts of) how I went about this, because it wasn't immediately obvious to me how to make it all work. 

There is a not-so-interesting part I will mention first: I had to create a display-only form for each of the AUD$ tables (such as AUD$DG_CLASSES). That's not very interesting because APEX makes it so, so VERY easy to do: create a page of type Form, base it on the table of interest, and select the columns you want to see. After the page is created, change the items to display only and remove the ability to insert and update. It's tedious, but not very - compared to the effort you'd have to make if you were building your UI in - gasp - JavaScript. So consider all that done. :-)

Now, here's the enhanced report:

The link on the aud$id column (primary key in AUDIT_HEADER table) takes me to a page for the audit table for the specific table name (e.g., if DG_CLASSES then go to the form for AUD$DG_CLASSES).

The link on the key value takes me to an existing page that edits the entity, such as the Class Editor page.

So that's what I want... but for those of you familiar with defining links on report columns, you might have a sense of the challenge. I cannot hard-code the page number for either of those links. They depend on the table name in the header row, which means they will change from row to row. So my link specifications need to be, well, rather dynamic. Here's one of them:
The "#" indicates that these are columns from the report.  Talk about dynamic!

And I'll need even more than that. To drill down to the edit page for an entity, I need to figure out (a) the name of the primary key column for a given table and (b) the primary key value for a given audit header row.

So let's take a look at the query underlying the report, and then the code I reference in that query.

SELECT aud$id,
       aud$tstamp,
       table_name,
       dg_audit_util.audit_key_from_header (aud$id) key_value,
       form_page pagenum,
       'P' || form_page || '_AUD_ID' pageitem,
       edit_page keypage,
       'P' || edit_page || '_' || dg_audit_util.key_colname_from_header (aud$id)
          keyitem
  FROM (SELECT a.*,
               dg_audit_util.form_page (table_name) form_page,
               dg_audit_util.edit_page (table_name) edit_page
          FROM audit_header a)

Here's the package specification:

PACKAGE dg_audit_util
IS
   FUNCTION form_page (table_name_in IN VARCHAR2)
      RETURN PLS_INTEGER;

   FUNCTION edit_page (table_name_in IN VARCHAR2)
      RETURN PLS_INTEGER;

   FUNCTION audit_key_from_header (aud$id_in IN NUMBER)
      RETURN INTEGER;

   FUNCTION key_colname_from_header (aud$id_in IN NUMBER)
      RETURN VARCHAR2;
END dg_audit_util;

Here's the package body. Actually, I don't think the details are all that interesting. In the first two functions I am simply encapsulating the lists of relevant page numbers. The key_colname_from_header function "reverse engineers" my naming convention, which is that (a) table names include an application prefix and are plural, which in most cases means simply adding an "S" to the entity name, and (b) the primary key column name is the singular form of the table name (the entity name) with "_ID" added to it. For example:

Entity = User

Table name = QDB_USERS

Primary key = USER_ID

But then of course there are exceptions like "DG_CLASSES" and "QDB_QUIZZES," which must be handled as special cases.

Entity = Class

Table name = DG_CLASSES

Primary key = CLASS_ID
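Here is a quick way to see the generic rule at work, using the same SUBSTR/INSTR expression the package body relies on: strip everything up to and including the underscore, drop the trailing "S", and append "_ID".

```sql
-- For QDB_USERS: strip "QDB_", drop the trailing "S", append "_ID"
SELECT SUBSTR ('QDB_USERS',
               INSTR ('QDB_USERS', '_') + 1,
               LENGTH ('QDB_USERS') - INSTR ('QDB_USERS', '_') - 1)
       || '_ID' AS pk_column
  FROM DUAL;
-- PK_COLUMN: USER_ID
```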

The audit_key_from_header function gets the primary key value from the row in the per-entity audit table (e.g., AUD$DG_CLASSES).

PACKAGE BODY dg_audit_util
IS
   FUNCTION form_page (table_name_in IN VARCHAR2)
      RETURN PLS_INTEGER
   IS
   BEGIN
      RETURN CASE table_name_in
                WHEN 'DG_CLASSES' THEN 75
                WHEN 'QDB_COMPEV_ANSWERS' THEN 84
                WHEN 'QDB_USERS' THEN 90
                WHEN 'DG_STUDENTS' THEN 92
                WHEN 'DG_USER_WORKOUTS' THEN 101
                WHEN 'QDB_COMP_EVENTS' THEN 93
                ELSE 106
             END;
   END form_page;

   FUNCTION edit_page (table_name_in IN VARCHAR2)
      RETURN PLS_INTEGER
   IS
   BEGIN
      RETURN CASE table_name_in
                WHEN 'DG_CLASSES' THEN 35
                WHEN 'QDB_COMPEV_ANSWERS' THEN 3
                WHEN 'QDB_USERS' THEN 109
                WHEN 'DG_STUDENTS' THEN NULL
                WHEN 'QDB_COMP_EVENTS' THEN 54
                ELSE 106
             END;
   END edit_page;

   FUNCTION key_colname_from_header (aud$id_in IN NUMBER)
      RETURN VARCHAR2
   IS
      l_name       VARCHAR2 (1000);
      l_col_name   VARCHAR2 (1000);
   BEGIN
      SELECT table_name
        INTO l_name
        FROM audit_header
       WHERE aud$id = aud$id_in;

      l_col_name :=
         CASE l_name
            WHEN 'DG_CLASSES' THEN 'CLASS_ID'
            WHEN 'QDB_QUIZZES' THEN 'QUIZ_ID'
            ELSE
               /* Strip the app prefix and trailing "S", then append "_ID" */
               SUBSTR (l_name,
                       INSTR (l_name, '_') + 1,
                       LENGTH (l_name) - INSTR (l_name, '_') - 1)
               || '_ID'
         END;

      RETURN l_col_name;
   END key_colname_from_header;

   FUNCTION audit_key_from_header (aud$id_in IN NUMBER)
      RETURN INTEGER
   IS
      l_name   VARCHAR2 (1000);
      l_key    INTEGER;
   BEGIN
      SELECT table_name
        INTO l_name
        FROM audit_header
       WHERE aud$id = aud$id_in;

      EXECUTE IMMEDIATE
            'select '
         || key_colname_from_header (aud$id_in)
         || ' from aud$'
         || l_name
         || ' where aud$id = :id'
         INTO l_key
         USING aud$id_in;

      RETURN l_key;
   EXCEPTION
      WHEN NO_DATA_FOUND
      THEN
         RETURN NULL;
   END audit_key_from_header;
END dg_audit_util;
