
Execution of DDL in PL/SQL commits TWICE: before and after the statement

You'd think that after working with Oracle Database and PL/SQL since 1990, I'd know everything.

Ha. Not so. :-)

Of course, there are always the new features, such as those coming in 12.2 (I will be blogging about those extensively in the coming months).

But even for features that have been in the language for decades, I still encounter small gaps in my knowledge.

For example, I had long known that when you execute a DDL (data definition language) statement in PL/SQL (which must be done as dynamic SQL, via EXECUTE IMMEDIATE or DBMS_SQL.PARSE/EXECUTE), a commit is performed implicitly after the statement.
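
Here, for reference, is a minimal sketch of both flavors of dynamic DDL (the table name plch_demo is just an illustration; note that with DBMS_SQL, a DDL statement is actually run during the PARSE call):

BEGIN
   EXECUTE IMMEDIATE 'create table plch_demo (n number)';
END;
/

DECLARE
   l_cursor   INTEGER := DBMS_SQL.open_cursor;
BEGIN
   -- For DDL, the statement is executed as part of the parse
   DBMS_SQL.parse (l_cursor, 'drop table plch_demo', DBMS_SQL.native);
   DBMS_SQL.close_cursor (l_cursor);
END;
/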

What I'd somehow missed was that a commit is also performed before the DDL statement is executed. So that is the point of this post:
Oracle Database issues a commit before a DDL statement is executed, and then afterwards as well.
You can see this behavior in action in the script below, which comes from a recent PL/SQL Challenge quiz and can be run directly in LiveSQL. All four blocks result in "Count=0" being displayed, because the effect of the DELETE is committed before the DDL statement is executed. Therefore, even when the DDL statement fails with an error, the data is gone.

CREATE TABLE plch_data (n NUMBER)
/

BEGIN
   INSERT INTO plch_data
        VALUES (100);

   COMMIT;
END;
/

CREATE OR REPLACE PROCEDURE plch_show_count
IS
   l_count   INTEGER;
BEGIN
   SELECT COUNT (*) INTO l_count FROM plch_data;

   DBMS_OUTPUT.put_line ('Count=' || l_count);
END;
/
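
-- Block 1: the CREATE TABLE fails (plch_data already exists), but the DELETE was
-- committed before the DDL was attempted, so Count=0.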

BEGIN
   DELETE FROM plch_data;

   EXECUTE IMMEDIATE 'create table plch_data (n number)';

   plch_show_count;
EXCEPTION
   WHEN OTHERS
   THEN
      plch_show_count;
END;
/
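
-- Block 2: the CREATE TABLE succeeds; again, the DELETE is committed before the
-- DDL runs, so Count=0.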

BEGIN
   DELETE FROM plch_data;

   EXECUTE IMMEDIATE 'create table plch_data2 (n number)';

   plch_show_count;
EXCEPTION
   WHEN OTHERS
   THEN
      plch_show_count;
END;
/
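
-- Block 3: the CREATE TABLE fails; the ROLLBACK in the exception handler cannot
-- undo the DELETE, which was committed before the DDL, so Count=0.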

BEGIN
   DELETE FROM plch_data;

   EXECUTE IMMEDIATE 'create table plch_data (n number)';

   ROLLBACK;
   plch_show_count;
EXCEPTION
   WHEN OTHERS
   THEN
      ROLLBACK;
      plch_show_count;
END;
/
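
-- Drop plch_data2 (created in the second block) so that the last block can create it again.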

DROP TABLE plch_data2
/
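
-- Block 4: the CREATE TABLE succeeds; the ROLLBACK that follows cannot undo the
-- DELETE, which was committed before the DDL, so Count=0 - and the new table persists.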

BEGIN
   DELETE FROM plch_data;

   EXECUTE IMMEDIATE 'create table plch_data2 (n number)';

   ROLLBACK;
   plch_show_count;
EXCEPTION
   WHEN OTHERS
   THEN
      ROLLBACK;
      plch_show_count;
END;
/
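
-- Clean up.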

DROP TABLE plch_data2
/

DROP TABLE plch_data
/

DROP PROCEDURE plch_show_count
/

Comments

  1. Hi, is it the DDL committing, or is it just that you don't need to commit to see that you have no rows because you're in the same session?

    DECLARE
       l_count   INTEGER;
    BEGIN
       DELETE FROM plch_data;

       SELECT COUNT (*) INTO l_count FROM plch_data;
    END;
    /

  2. Mustafa, what you say is correct: you can immediately see the effects of your changes within your own session. No one ELSE can see those changes until you commit. So the interesting part of the script in this post is that even after you roll back in your session, the changes have been saved - if a DDL statement was executed "in between". Make sense?

  3. It made sense, of course, but here is the part that's missing: those demos show us that DDL makes an implicit commit.
    How do you observe the second commit?

  4. Huh. Good point. I suppose the impact of the second commit is that the results of your DDL statement are visible in the data dictionary!
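
    For instance, a rough sketch: run this query after the fourth block above (and before its DROP TABLE cleanup), and the new table shows up in the data dictionary even though that block ended with a ROLLBACK:

    SELECT table_name
      FROM user_tables
     WHERE table_name = 'PLCH_DATA2';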

  5. Hi Steven,

    A reader of your blog has posted an interesting case in a French forum where the commit did not occur when the DDL had a syntax error.
    However, it does occur when the syntax is OK but the semantic parse raises an error. I've put my test on LiveSQL.

    The first is a syntax error (the table name is a reserved word) -> no commit
    The second is a semantic error (the table name already exists) -> commit

    Regards,
    Franck.

    Replies
      1. Thanks, Franck. That's very interesting and good to know. Perhaps it will even make it onto an Oracle Dev Gym quiz soon!

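    A minimal sketch of the first case Franck describes, re-using the plch_data table and plch_show_count procedure from the script above (before its cleanup drops), with a committed row put back in first:

    BEGIN
       INSERT INTO plch_data VALUES (100);
       COMMIT;
    END;
    /

    BEGIN
       DELETE FROM plch_data;

       BEGIN
          -- Fails during the syntax parse: TABLE is a reserved word
          EXECUTE IMMEDIATE 'create table table (n number)';
       EXCEPTION
          WHEN OTHERS THEN NULL;
       END;

       ROLLBACK;

       -- Count=1 here would confirm that the failed parse did not commit the DELETE
       plch_show_count;
    END;
    /

    The second case (a semantic error such as ORA-00955: name is already used) is what the blocks in the post itself demonstrate.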
