Thursday, March 31, 2016

Use TREAT to Access Attributes of Object Subtypes

The TREAT function comes in very handy when you are working with an object type hierarchy and need to access attributes or methods of a subtype of a row's or column's declared type. This topic was covered in a PL/SQL Challenge quiz offered in March 2016.

Suppose I have the following type hierarchy, and I use these types as column types in my meals table:

CREATE TYPE food_t AS OBJECT (
   name VARCHAR2 (100),
   food_group VARCHAR2 (100),
   grown_in VARCHAR2 (100)
)
   NOT FINAL;
/

CREATE TYPE dessert_t UNDER food_t (
      contains_chocolate CHAR (1),
      year_created NUMBER (4)
   )
   NOT FINAL;
/

CREATE TYPE cake_t UNDER dessert_t (
      diameter NUMBER,
      inscription VARCHAR2 (200)
   );
/

CREATE TABLE meals
(
   served_on     DATE,
   appetizer     food_t,
   main_course   food_t,
   dessert       dessert_t
);

I then insert some rows into the table:

BEGIN
   INSERT INTO meals
        VALUES (SYSDATE + 1,
           food_t ('Shrimp cocktail', 'PROTEIN', 'Ocean'),
           food_t ('Stir fry tofu', 'PROTEIN', 'Vat'),
           cake_t ('Apple Pie',
                    'FRUIT', 'Baker''s Square',
                    'N', 2001, 8, NULL));

   INSERT INTO meals
        VALUES (SYSDATE + 1,
                food_t ('Fried Calamari', 'PROTEIN', 'Ocean'),
                dessert_t ('Butter cookie',
                   'CARBOHYDRATE', 'Oven', 'N', 2001),
                cake_t ('French Silk Pie',
                   'CARBOHYDRATE', 'Baker''s Square',
                   'Y', 2001, 6, 'To My Favorite Frenchman'));

   INSERT INTO meals
        VALUES (SYSDATE + 1,
                food_t ('Fried Calamari', 'PROTEIN', 'Ocean'),
                cake_t ('French Silk Pie',
                        'CARBOHYDRATE', 'Baker''s Square',
                        'Y', 2001, 6, 'To My Favorite Frenchman'),
                dessert_t ('Butter cookie',
                           'CARBOHYDRATE', 'Oven', 'N', 2001));
   COMMIT;
END;
/

Notice that even though appetizer and main_course are defined as food_t, I can assign dessert_t and cake_t instances to those columns, because object types support substitutability (the best way to understand that is: every dessert is a food, but not every food is a dessert).

Let's take a look at some of the ways I can use TREAT.

1. I want to find all the meals in which the main course is actually a dessert.

SELECT *
  FROM meals
 WHERE TREAT (main_course AS dessert_t) IS NOT NULL


2. Show whether or not those dessert-centric meals contain chocolate. First with PL/SQL:


DECLARE
   l_dessert   dessert_t;
BEGIN
   FOR rec IN (
      SELECT * FROM meals
       WHERE TREAT (main_course AS dessert_t) IS NOT NULL)
   LOOP
      l_dessert := TREAT (rec.main_course AS dessert_t);

      DBMS_OUTPUT.put_line (
         rec.main_course.name || '-' || 
         l_dessert.contains_chocolate);
   END LOOP;
END;
/


And now with "pure" SQL:

SELECT TREAT (m.main_course AS dessert_t).contains_chocolate
  FROM meals m
 WHERE TREAT (main_course AS dessert_t) IS NOT NULL


The thing to realize in both these cases is that even though I have identified only those meals for which the main course is a dessert, I still must explicitly TREAT (narrow) the main_course column to dessert_t before I can reference the contains_chocolate attribute.


If I forget the TREAT in the SELECT list, such as:

SELECT m.main_course.contains_chocolate
  FROM meals m
 WHERE TREAT (main_course AS dessert_t) IS NOT NULL

I will see this error:

ORA-00904: "M"."MAIN_COURSE"."CONTAINS_CHOCOLATE": invalid identifier

3. Set to NULL any desserts that are not cakes.

UPDATE meals
   SET dessert = TREAT (dessert AS cake_t);

Tuesday, March 22, 2016

The Case of the Confusing CASE

This odd little piece of code was featured in the weekly PL/SQL Challenge quiz 12 March - 18 March 2016.

What do you think will be displayed after executing the following block?
DECLARE
   my_flag   BOOLEAN;
BEGIN
   CASE my_flag
      WHEN my_flag IS NULL
      THEN
         DBMS_OUTPUT.PUT_LINE ('my_flag is NULL');
      WHEN TRUE
      THEN
         DBMS_OUTPUT.PUT_LINE ('my_flag is TRUE');
      ELSE
         DBMS_OUTPUT.PUT_LINE ('my_flag is FALSE');
   END CASE;
END;
/
At first glance (if you are like me), you would say "my_flag is NULL", right?

After all, my_flag is initialized to NULL when declared, and I don't change the value.

But, lo and behold, you will see:
my_flag is FALSE
Curious, right?

So what's going on? Well, we have a very confused and confusing piece of code: I have written a simple CASE (which is of the form CASE expression WHEN ...), but then my WHEN clauses follow a typical searched CASE format (CASE WHEN expr1 ... WHEN expr2 ...). In a simple CASE, each WHEN expression is compared for equality to the CASE expression - here, my_flag, which is NULL. The first WHEN expression, my_flag IS NULL, evaluates to TRUE, but NULL is never equal to TRUE (the comparison yields NULL); the same goes for the second WHEN. So neither WHEN matches, and the ELSE branch runs.
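
If the intent was to branch on NULL, TRUE and FALSE, a searched CASE (no expression between CASE and the first WHEN) does the job. Here is a minimal rewrite of the block above:

DECLARE
   my_flag   BOOLEAN;
BEGIN
   CASE
      WHEN my_flag IS NULL
      THEN
         DBMS_OUTPUT.PUT_LINE ('my_flag is NULL');
      WHEN my_flag
      THEN
         DBMS_OUTPUT.PUT_LINE ('my_flag is TRUE');
      ELSE
         DBMS_OUTPUT.PUT_LINE ('my_flag is FALSE');
   END CASE;
END;
/

Now the block displays "my_flag is NULL", as you would expect.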

CASE is a really wonderful feature in PL/SQL (and many other languages, of course), but you need to make sure you use it properly.

Monday, March 21, 2016

Help Celebrate One Million Answers at the PL/SQL Challenge!


Sometime in the next couple of months, someone will submit the 1,000,000th answer on the PL/SQL Challenge.
And Chris Saxon, member of the AskTom Answer Team, SQL Wizard and Database Design Quizmaster, assures me that he will write a query to identify exactly who that person is (will be).
That's a very nice milestone, so we figured it would be a good time to celebrate. 
We are still sorting out precisely what we will do, but for sure, we will want to feature stories from our players:
  • How the PL/SQL Challenge has helped you in your career
  • High points (or low points) from your activity on the site (I have low points, which I will share, hopefully none of you do!)
  • Interesting stories from the last six years of answering and commenting on quizzes
  • Whatever else comes to mind!
You are welcome to post your stories here, on the PL/SQL Challenge blog, or email them directly to steven.feuerstein@oracle.com. I will be collating them for publication on the site.
I would also be very pleased to do some video interviews of players, reviewers, etc. After all, we do live in the Age of YouTube.
So if you are open to or interested in that, let me know. 
Thanks!
Steven Feuerstein 

Accessing index of associative array in SELECT-FROM TABLE() operation

As of Oracle Database 12c Release 1, you can now use the TABLE operator with associative arrays whose types are declared in a package specification. Prior to 12.1, this was only possible with schema-level nested table and varray types. Here's a quick example:

CREATE OR REPLACE PACKAGE aa_pkg AUTHID DEFINER
IS
   TYPE record_t IS RECORD (nm VARCHAR2 (100), sal NUMBER);

   TYPE array_t IS TABLE OF record_t INDEX BY PLS_INTEGER;

   FUNCTION my_array RETURN array_t;
END;
/

CREATE OR REPLACE PACKAGE BODY aa_pkg
IS
   FUNCTION my_array RETURN array_t
   IS
      l_return   array_t;
   BEGIN
      l_return (1).nm := 'Me'; l_return (1).sal := 1000;
      l_return (200).nm := 'You'; l_return (200).sal := 2;
      RETURN l_return;
   END;
END;
/

DECLARE
   l_array   aa_pkg.array_t;
BEGIN
   l_array := aa_pkg.my_array;

   FOR rec IN (  SELECT * FROM TABLE (l_array) ORDER BY nm)
   LOOP
      DBMS_OUTPUT.put_line (rec.nm);
   END LOOP;
END;
/

With nested tables and varrays, the index values used in those collections are usually not very interesting. Varrays are always densely filled (no gaps between the first and last index values). Nested tables are (or should be treated as) multisets: the order of their elements is not significant.

But with associative arrays, index values can be negative. They can be strings. And it is not terribly uncommon for associative arrays to be sparse (not every index value between first and last is defined).

What if you need to access those index values in the dataset returned by the TABLE operator? In this case, I am sorry to have to tell you, you have to do some extra work. That information simply isn't available natively - which, I think, is quite reasonable. When you use SELECT-FROM TABLE you are saying, in effect, "I want to treat the data as a virtual table." Tables don't have indexes built into them. You have to specify them "on top" of the table.

In any case, the solution isn't terribly difficult. You simply add a field to your record (or an attribute to your object type). Or, if your collection is currently a collection of scalars (a list of dates, strings or numbers), then you will have to create a record or object type to hold both that scalar value and the index value. Then the index value is available as "just another column" in your query.

Here are the steps:

1. Recompile the package specification, after adding an index field. The package body remains the same as above, so I won't repeat it below.

CREATE OR REPLACE PACKAGE aa_pkg AUTHID DEFINER 
IS 
   TYPE record_t IS RECORD 
   ( 
      idx   INTEGER, 
      nm    VARCHAR2 (100), 
      sal   NUMBER 
   ); 
 
   TYPE array_t IS TABLE OF record_t INDEX BY PLS_INTEGER; 
 
   FUNCTION my_array  RETURN array_t; 
END; 
/

2. Reference that index value as a column in your query:

DECLARE 
   l_array   aa_pkg.array_t; 
   l_index   PLS_INTEGER; 
BEGIN 
   l_array := aa_pkg.my_array; 
 
   l_index := l_array.FIRST; 
 
   WHILE l_index IS NOT NULL 
   LOOP 
      l_array (l_index).idx := l_index; 
      l_index := l_array.next (l_index); 
   END LOOP; 
 
   FOR rec IN (  SELECT * FROM TABLE (l_array) ORDER BY idx) 
   LOOP 
      DBMS_OUTPUT.put_line (rec.idx || ' = ' || rec.nm); 
   END LOOP; 
END; 
/


This script can be executed in its entirety on LiveSQL.

Thursday, March 17, 2016

Nine Good-to-Knows about PL/SQL Error Management


1. Exceptions raised in the declaration section are not handled in the exception section.

This sometimes surprises a developer new to PL/SQL. The exception section of a PL/SQL block can only possibly handle an exception raised in the executable section. An exception raised in the declaration section (in an attempt to assign a default value to a variable or constant) always propagates out unhandled to the enclosing block.


2. An exception raised does not automatically roll back uncommitted changes to tables.

Any non-query DML statements that complete successfully in your session are not rolled back when an exception occurs - whether it is raised directly in PL/SQL or propagated up from the SQL engine. You still have the option of either committing or rolling back yourself.

If, however, the exception goes unhandled out to the host environment, a rollback almost always occurs (this is performed by the host environment).
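
Here's a minimal illustration of the first point; the scratch table t (with a single NUMBER column n) is just an assumption for the demo:

DECLARE
   l_count   PLS_INTEGER;
BEGIN
   INSERT INTO t (n) VALUES (1);

   RAISE PROGRAM_ERROR;
EXCEPTION
   WHEN PROGRAM_ERROR
   THEN
      /* The insert has not been rolled back; it is up to me
         to decide whether to commit or roll back. */
      SELECT COUNT (*) INTO l_count FROM t;

      DBMS_OUTPUT.put_line ('Rows visible in t: ' || l_count);

      ROLLBACK;
END;
/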


3. You can (and should!) name those unnamed ORA errors (never hard-code an error number).

Oracle Database pre-defines a number of exceptions for common ORA errors, such as NO_DATA_FOUND and VALUE_ERROR. But there are a whole lot more errors for which there is no pre-defined name, and some of them can be encountered quite often in code. The key thing for developers is to avoid hard-coding these error numbers in your code. Instead, use the EXCEPTION_INIT pragma to assign a name to the error code, and then handle it by name.
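
For example, ORA-02292 (an attempt to delete a parent row that still has child rows) has no pre-defined name. Here's a minimal sketch of naming and handling it, assuming the standard HR sample schema:

DECLARE
   e_child_record_found   EXCEPTION;
   PRAGMA EXCEPTION_INIT (e_child_record_found, -2292);
BEGIN
   DELETE FROM hr.departments
         WHERE department_id = 50;
EXCEPTION
   WHEN e_child_record_found
   THEN
      DBMS_OUTPUT.put_line (
         'Cannot remove a department that still has employees.');
END;
/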


4. If you do not re-raise an exception in your exception handler, the outer block doesn't know an error has occurred.

Just sayin'. You have a subprogram that invokes another subprogram (or nested block). That "inner" subprogram fails with an exception. It contains an exception handler. It logs the error, but then neglects to re-raise that exception (or another). Control passes out to the invoking subprogram, and it continues executing statements, completely unaware that an error occurred in that inner block. Which means, by the way, that a call to SQLCODE will return 0. This may be just what you want, but make sure you do this deliberately.
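
A minimal sketch of the trap:

DECLARE
   PROCEDURE inner_step
   IS
   BEGIN
      RAISE NO_DATA_FOUND;
   EXCEPTION
      WHEN OTHERS
      THEN
         DBMS_OUTPUT.put_line ('Logged - but not re-raised!');
   END;
BEGIN
   inner_step;

   /* Execution continues as if nothing went wrong. */
   DBMS_OUTPUT.put_line ('SQLCODE out here = ' || SQLCODE);
END;
/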


5. Whenever you log an error, capture the call stack, error code, error stack and error backtrace.

Ideally, this is a total non-issue for you, because you simply invoke a generic logger procedure in your exception handlers (example and recommendation: download and use Logger, an open source utility that does almost anything and everything you can think of).

But if you are about to write your own (or are using a home-grown logging utility), make sure that you call and store in your log (likely a relational table) the values returned by the following (a minimal logger sketch follows the list):
  • SQLCODE
  • DBMS_UTILITY.FORMAT_CALL_STACK (or corresponding subprograms in 12.1's UTL_CALL_STACK package) - answers question "How did I get here?"
  • DBMS_UTILITY.FORMAT_ERROR_STACK (or corresponding subprograms in 12.1's UTL_CALL_STACK package) - answers question "What is my error message/stack?" We recommend using this instead of SQLERRM.
  • DBMS_UTILITY.FORMAT_ERROR_BACKTRACE (or corresponding subprograms in 12.1's UTL_CALL_STACK package) - answers question "On what line was the error raised?"
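
If you do roll your own, here is a minimal sketch; the error_log table and the log_error procedure name are assumptions, and the autonomous transaction keeps the logging commit separate from your business transaction:

CREATE OR REPLACE PROCEDURE log_error (
   error_code_in        IN INTEGER,
   error_stack_in       IN VARCHAR2,
   error_backtrace_in   IN VARCHAR2)
   AUTHID DEFINER
IS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO error_log (logged_at,
                          error_code,
                          call_stack,
                          error_stack,
                          error_backtrace)
        VALUES (SYSTIMESTAMP,
                error_code_in,
                DBMS_UTILITY.format_call_stack,
                error_stack_in,
                error_backtrace_in);

   COMMIT;
END;
/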

6. Always log your error (and backtrace) before re-raising the exception.

When you re-raise an exception, you will reset the backtrace (the track back to the line on which the error was raised) and might change the error code (if you raise a different exception to propagate the exception "upwards").

So it is extremely important to call your error-logging subprogram (see previous Good to Know) before you re-raise an exception.
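
Using the (hypothetical) log_error procedure sketched in the previous point, the handler pattern looks like this:

BEGIN
   do_something_that_fails;   -- hypothetical subprogram that raises an error
EXCEPTION
   WHEN OTHERS
   THEN
      /* Log first, while the error code, stack and backtrace are intact... */
      log_error (SQLCODE,
                 DBMS_UTILITY.format_error_stack,
                 DBMS_UTILITY.format_error_backtrace);

      /* ...then re-raise so the caller knows something went wrong. */
      RAISE;
END;
/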


7. Compile-time warnings will help you avoid "WHEN OTHERS THEN NULL".

The following exception sections "swallow up" errors - one of Tom Kyte's favorite pet peeves.

EXCEPTION WHEN OTHERS THEN NULL;

EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE (SQLERRM);

In fact, any exception handler that does not re-raise the same exception (or another) runs the risk of hiding errors from the calling subprogram, your users, and yourself as you debug your code.

Generally, you should log the error, then re-raise it. 

There are certainly some cases in which this advice does not hold (for example: a function that fetches a single row for a primary key. If there is no row for the key, it's not an application error, so just return NULL). In those cases, include a comment so that the person maintaining your code in the distant future knows that you weren't simply ignoring the Wisdom of the Kyte. Example:

EXCEPTION WHEN OTHERS THEN /* No company for this ID; let calling subprogram decide what to do */ RETURN NULL;

One way to avoid this problem is to turn on compile-time warnings. Then when your program unit is compiled, you will be warned if the compiler has identified an exception handler that does not contain a RAISE statement or a call to RAISE_APPLICATION_ERROR.
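
For example, with warnings enabled, compiling a procedure whose OTHERS handler does not re-raise produces a warning along these lines:

ALTER SESSION SET plsql_warnings = 'ENABLE:ALL';

CREATE OR REPLACE PROCEDURE swallow_it
IS
BEGIN
   NULL;
EXCEPTION
   WHEN OTHERS
   THEN
      NULL;
END;
/

-- Warning: PLW-06009: procedure "SWALLOW_IT" OTHERS handler does not end in
-- RAISE or RAISE_APPLICATION_ERROR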


8. Use LOG ERRORS to suppress SQL errors at the row level.

The impact of a non-query DML statement is usually "all or nothing". If my update statement identifies 100 rows to change, then either all 100 rows are changed or none are. And none might be the outcome if, say, an error occurs on just one of the rows (value too large to fit in column, NULL value for non-NULL column, etc.).

But if you have a situation in which you would really like to "preserve" as many of those row-level changes as possible, you can add the LOG ERRORS clause to your DML statement. Then, if any row changes raise an error, that information is written to your error log table, and processing continues.

IMPORTANT: if you use LOG ERRORS, you must must must check that error log table immediately after the DML statement completes. You should also enhance the default error log table.
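
A minimal sketch of the pattern, assuming the HR employees table (DBMS_ERRLOG creates the err$_employees log table for you):

BEGIN
   DBMS_ERRLOG.create_error_log (dml_table_name => 'EMPLOYEES');
END;
/

UPDATE employees
   SET salary = salary * 100000
   LOG ERRORS INTO err$_employees ('salary hike') REJECT LIMIT UNLIMITED;

/* Check the error log table immediately! */
SELECT ora_err_number$, ora_err_mesg$, ora_err_tag$
  FROM err$_employees;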


9. Send an application-specific error message to your users with RAISE_APPLICATION_ERROR.

If you execute a SELECT-INTO that does not identify any rows, the PL/SQL runtime engine raises ORA-01403, and the error message (retrieved via SQLERRM or DBMS_UTILITY.FORMAT_ERROR_STACK) is simply "no data found".

That may be exactly what you want your users to see. But there is a very good chance you'd like to offer something more informative, such as "An employee with that ID is not in the system."

In this case, you can use RAISE_APPLICATION_ERROR, as in:

CREATE OR REPLACE PACKAGE BODY employees_mgr
IS
   FUNCTION onerow (employee_id_in IN hr.employees.employee_id%TYPE)
      RETURN hr.employees%ROWTYPE
      RESULT_CACHE
   IS
      l_employee   hr.employees%ROWTYPE;
   BEGIN
      SELECT *
        INTO l_employee
        FROM hr.employees
       WHERE employee_id = employee_id_in;

      RETURN l_employee;
   EXCEPTION
      WHEN NO_DATA_FOUND
      THEN
         raise_application_error (
            -20000,
            'An employee with that ID is not in the system.');
   END;
END;
/

Verify on LiveSQL: Send Application-specific Error Message To Users With RAISE_APPLICATION_ERROR



Wednesday, March 16, 2016

A Checklist for Bulk Processing Conversions in PL/SQL

[ Note: download the latest version of the checklist from OTN. You can comment on the checklist there, or on this blog post. Thanks!]

Bulk processing (FORALL and BULK COLLECT) is pretty much the most powerful and important performance feature available for the PL/SQL language. It addresses head-on one of the most common drags on performance in Oracle Database applications: row-by-row processing.

The basic "anti-pattern" to look for in your code is a loop that contains non-query DML (insert, update, delete merge). This "DML loop" could be in PL/SQL code, but it could also appear in Java, Python, C, etc. Here's an example (DML statements in red):

PROCEDURE upd_for_dept (
   dept_in     IN employees.department_id%TYPE
 , newsal_in   IN employees.salary%TYPE)
IS
   CURSOR emp_cur
   IS
      SELECT employee_id, salary, hire_date
        FROM employees
       WHERE department_id = dept_in;
BEGIN
   FOR rec IN emp_cur
   LOOP
      BEGIN
         INSERT INTO employee_history (employee_id, salary, hire_date)
              VALUES (rec.employee_id, rec.salary, rec.hire_date);

         rec.salary := newsal_in;

         calc_pkg.adjust_compensation (rec.employee_id, rec.salary);

         UPDATE employees
            SET salary = rec.salary
          WHERE employee_id = rec.employee_id;
      EXCEPTION
         WHEN OTHERS
         THEN
            log_error;
      END;
   END LOOP;
END upd_for_dept;

The process by which you convert from row-by-row (or as Tom Kyte likes to call it, "slow by slow") to bulk processing is straightforward but very much non-trivial. When you are done, your code volume and complexity will have increased, usually substantially. But when you can achieve an order of magnitude improvement in performance, it's a price worth paying.

While the conversion process is straightforward, it is also a process you need to follow with great care. After all, you are modifying transaction-related code (changes to tables). There is no room for error; you can't afford to mess up those transactions or allow bad data to make its way into your database.

To help you do it right, I created a checklist of items to think through and take care of in your code as you move from row-by-row to bulk. I offer the checklist below, but the latest version can always be found on the Oracle Technology Network.


Scenario: You have a loop in your code that executes one or more non-query DML statements inside of it. It is running too slowly and you need to make it go faster. Can FORALL with BULK COLLECT help? Follow this checklist for a successful conversion.

1. Analyze and document all aspects of your transaction.
Behavior post-conversion must be the same. Of particular importance is to be clear on what happens when an error occurs. Do you stop immediately? Do you log the error and continue? If there are multiple DML statements inside the loop, how does an error in one statement affect what happens in the other statements? Do you commit at the end of the process or with each iteration of the loop?

2. Build a regression test from the above analysis.
Sorry, but it must be said: the best (only?) way to ensure that your bulk-ified logic is error-free is to turn your documentation of current behavior into a regression test that you can run against the new code. Check out SQL Developer's unit testing feature, utPLSQL or another framework.

3. Make sure you need to use PL/SQL's bulk processing features.
Can you do everything you need in "pure" SQL? Can you leverage DBMS_PARALLEL_EXECUTE or pipelined table functions? Generally, you need a PL/SQL-driven solution when you must perform complex procedural steps on each row of data: you load the rows into collections, process them in PL/SQL, then push the changes back to the database.

4. Implement bulk processing phases as nested subprograms.
There are three basic "phases" with bulk processing:
1. Load collections, usually with BULK COLLECT.
2. Modify collections (complex procedural logic - the reason you can't use "pure" SQL).
3. Push collections to table(s) with FORALL, one for each DML statement.

Bulk code can get very long and complex, so the best approach is to encapsulate each step inside its own subprogram and keep the main body short and readable. And if any of your subprograms has an OUT or IN OUT collection type parameter, don't forget to add the NOCOPY hint!

5. Use BULK COLLECT LIMIT and soft-code the limit.
Avoid "unlimited" SELECT BULK COLLECT INTO statements, since you could use too much PGA memory (now or in the future). Instead, declare a cursor, and within a simple loop FETCH BULK COLLECT INTO with a LIMIT clause. Soft-code the limit, preferably as a parameter to the subprogram. And keep in mind:
  • A default of 100 is a good start but be ready to experiment with larger numbers.
  • To terminate the loop, use EXIT WHEN your_collection.COUNT = 0; immediately after FETCH or EXIT WHEN your_cursor%NOTFOUND; just before END LOOP statement.
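
Here's a minimal sketch of that pattern against the (assumed) employees table; the constant stands in for a soft-coded limit parameter:

DECLARE
   CURSOR emp_cur
   IS
      SELECT * FROM employees;

   TYPE emp_tab_t IS TABLE OF employees%ROWTYPE
      INDEX BY PLS_INTEGER;

   l_employees   emp_tab_t;

   c_limit       CONSTANT PLS_INTEGER := 100;
BEGIN
   OPEN emp_cur;

   LOOP
      FETCH emp_cur BULK COLLECT INTO l_employees LIMIT c_limit;

      EXIT WHEN l_employees.COUNT = 0;

      /* Process this batch of up to c_limit rows. */
      DBMS_OUTPUT.put_line ('Fetched ' || l_employees.COUNT || ' rows');
   END LOOP;

   CLOSE emp_cur;
END;
/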
6. Determine how to communicate between FORALL steps.
If your loop has > 1 DML statement, it will have > 1 FORALL statement. An error in one DML statement often affects what happens in a subsequent DML statement. So when you switch to bulk, you must communicate the impact of failures in one FORALL to another, later one. SAVE EXCEPTIONS and SQL%BULK_EXCEPTIONS will be key for this.
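
Here's a minimal sketch of the building blocks, again assuming the employees table; ORA-24381 is the error raised when SAVE EXCEPTIONS has accumulated failures:

DECLARE
   bulk_errors   EXCEPTION;
   PRAGMA EXCEPTION_INIT (bulk_errors, -24381);

   TYPE ids_t IS TABLE OF employees.employee_id%TYPE;

   l_ids   ids_t := ids_t (100, 101, 102);
BEGIN
   FORALL indx IN 1 .. l_ids.COUNT SAVE EXCEPTIONS
      UPDATE employees
         SET salary = salary * 2
       WHERE employee_id = l_ids (indx);
EXCEPTION
   WHEN bulk_errors
   THEN
      FOR indx IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
      LOOP
         /* ERROR_INDEX tells you which bind array element failed -
            use it to decide what a later FORALL should (not) process. */
         DBMS_OUTPUT.put_line (
               'Bind element #'
            || SQL%BULK_EXCEPTIONS (indx).ERROR_INDEX
            || ' failed: '
            || SQLERRM (-1 * SQL%BULK_EXCEPTIONS (indx).ERROR_CODE));
      END LOOP;
END;
/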

7. Are your FORALL bind arrays sparse, or do you need to use their elements selectively?
BULK COLLECT always fills sequentially, but sometimes you will have sparse arrays or need to selectively use elements in your bind array for DML processing. In this case, switch from FORALL indx IN low .. high to FORALL indx IN INDICES OF or FORALL indx IN VALUES OF.
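
A minimal sketch with a deliberately sparse bind array (employees again assumed):

DECLARE
   TYPE ids_t IS TABLE OF employees.employee_id%TYPE
      INDEX BY PLS_INTEGER;

   l_ids   ids_t;
BEGIN
   /* Sparse: only index values 1 and 500 are defined. */
   l_ids (1) := 100;
   l_ids (500) := 101;

   /* A range FORALL (1 .. 500) would fail on the gaps;
      INDICES OF visits only the defined index values. */
   FORALL indx IN INDICES OF l_ids
      UPDATE employees
         SET salary = salary + 10
       WHERE employee_id = l_ids (indx);
END;
/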

Thanks to padders, rp2043 and BluShadow for improving this checklist.

Links to resources on bulk processing:


Tuesday, March 8, 2016

I Love Backups and APIs, a.k.a., Test Delete Processes VERY THOROUGHLY

The PL/SQL Challenge offers weekly quizzes on SQL, PL/SQL, Database Design, Application Express, and Logic. A week on this website starts on Saturday at 00:00:00 UTC and ends on Friday at 23:59:59 UTC.

So early in the morning on Saturday, we run weekly processes to rank the past week's quizzes, and do a whole lot of cleaning-up.

Therein lies the tale of this post. But before getting into the story, a little "ancient history."

In 1978-1980, while I attended the University of Rochester, I took three "101" software classes (Algol, Lisp and something else). That is the extent of my formal training as a software developer. Move forward in time 36 years or so, and here I am: a highly regarded PL/SQL programmer, trainer, author.

Still, though, essentially self-taught and definitely not a software engineer.

I have the highest regard for engineers in general, and software engineers in particular. They combine deep knowledge with a strong sense of discipline. The result, often though of course not always, is a product (be it a bridge or an application) that is relatively bug-free and works well for a long time.

My sense is that most software developers are not engineers. But I don't need to speak about other developers. I can speak with absolute clarity on this topic just by looking in the mirror as I type.

I sometimes shock my audiences by telling them I am an "amateur" developer and I imagine they don't believe me. But then sometimes I do something that drives the point home all too well.

Like this past Saturday. I was informed by some players on the PL/SQL Challenge that the results for last week's quizzes were not showing up. This is usually due to an error during the refresh of the ranking materialized views. I took a look and found that the problem was much worse: all of the answers to last week's quizzes were gone. As in deleted.

"How could this be?" I shouted silently. Well, you know how you get this feeling sometimes in your stomach, that feeling that says "You know how this could have happened, don't you, Steven?"

It didn't take me long to track it down. I kinda remembered working on cleanup code that would remove answers submitted by an admin on the site (an admin cannot compete in these quizzes). I kinda also remembered that I hadn't ever really finished it, but that was OK, because the logic was in a subprogram that wasn't invoked.

I kinda also remembered how bad my memory is. So I looked at the code and found:

1. The subprogram was invoked by the main process. Uh-oh.

2. The WHERE clause on the cleanup routine was awful, just awful. Instead of saying "Remove any answers submitted by a domain admin.", it said in effect "Remove any answers submitted for questions that were written by a domain admin."

Most of our quizzes are written by the admins in their respective domains.

In other words: bye, bye, quizzes!

Sweat beaded on my brow. My fingers shook. A whole week's worth of quiz activity gone? Awful to contemplate (though, of course, not the end of the world).

I immediately ran a flashback query but sadly too much time had already passed since the delete. I could not recover the data myself.

Fortunately (and not the least bit surprisingly), our operational IT staff was able to restore the data and in the end "no harm done." And that's because they use Oracle Data Guard to create a standby database synched to the production database. They then used our wonderful Flashback feature to go back to the desired point in time, and copied the three depleted tables to another instance for the recovery process. So nice....

But I was (and am) mortified at how unprofessional I was in this situation. Obviously, I didn't test this enhancement. My excuse is that I thought I wasn't done with it and had kept it out of the execution flow. Still, I made changes to the package and didn't verify that the code was disabled.

But my deeper shame was a realization as to how much I had veered from one of my favorite and most important best practices:

Hide your SQL behind an API.
a.k.a., Stop Writing So Much SQL!

In other words, from my application code (in this case, the Application Express interface and even many of the "behind the scenes" packages), I should not execute any non-query DML directly (inserts, updates, deletes). Instead, I should call predefined procedures to get the job done.

[For lots more detail on this approach, check out Bryn Llewellyn's Why Use PL/SQL?]

For example, I already had in place a "remove answer" procedure that copied the answers to an archive table before doing the actual delete.

   PROCEDURE remove_answer (comp_event_id_in    IN INTEGER,
                            user_id_in          IN INTEGER,
                            reverse_points_in   IN BOOLEAN DEFAULT TRUE)
   IS
   BEGIN
      /* Copy rows to archive table */
      INSERT INTO qdb_compev_answers_d ...

      /* Now do the delete */
      DELETE FROM qdb_compev_answers eva
            WHERE     eva.comp_event_id = comp_event_id_in
                  AND eva.user_id = user_id_in;
   END;

But I hadn't been careful about always using it. I found more than one place in my code that executed a DELETE FROM qdb_compev_answers statement directly - and therefore did not copy the rows to the archive first. That included the horrible, no-good, very bad statement with the wrong WHERE clause.

So after I sent my request to IT to PLEASE HELP ME ASAP, I searched across all my package bodies for "delete from qdb_compev_answers" and replaced those statements with calls to qdb_player_mgr.remove_answer. And, yes, I also searched my APEX application code to make sure I wasn't an even worse programmer than I already felt myself to be, by sticking non-query DML in that level of the stack.

At least I can report with some, ahem, shred of self-dignity, that the APEX application did not do any deletes against the answers table that did not go through the API.

So in the end, all data was restored. All players were happy. And my code quality improved. Even the IT support team was pleased to exercise and validate their recovery processes.

But did I learn my lesson? Will I be more careful with my code? Will I be more rigorous about using my SQL API?

Yes, yes! Lesson learned! I promise!

* Funny story about engineers:  Got on a crowded plane recently, with a seat all the way in the back. Made it to 29D and was about to sit in it, when I noticed a fellow right behind me. He said: "I'm in 29F." We looked towards the window. There was already somebody in 29F. He looked back at us. After a moment, I shrugged and asked him: "Are you sure you're supposed to be in 29F?" He looked up and said: "I'm a mechanical engineer. I think I know how to find the right seat on an airplane." Um, OK. Thirty seconds later, he looked up again, this time sheepishly: "Huh. I'm supposed to be in 30F. Sorry about that."  Moral of the story: Even engineers get it wrong sometimes.