
PL/SQL Brain Teaser: when does a COMMIT not commit?

So your users make changes to tables (with great care and security, through your PL/SQL API); your app calls this procedure and that function. At some point along the way, a COMMIT statement is executed successfully in a user's session.

Yet that same user's session still has uncommitted changes!

Huh? How is this possible?

Think you know? Comment below!

Comments

  1. The commit was within an autonomous transaction.

  2. commit write nowait/batch?

  3. Autonomous Transaction: allows you to leave the context of the calling transaction, perform an independent transaction, and return to the calling transaction without affecting its state. Hence the uncommitted changes in the user's session are still pending.

  4. Autonomous transaction! Love the brain teaser!!

  5. Just a few untested, random use cases, in a hurry. Feel free to correct.
    1. Dirty data is in bulk-collected collections. Processing commits in a loop.
    2. View with an INSTEAD OF trigger. The trigger only uses partial data for the update.
    3. Part of the dirty data is written to an external table on a really slow IO device.
    4. Distributed transaction. Part of the dirty data has to be updated on some other DB.
    5. Call to an external service from the API with part of the dirty data.
    6. Trigger on the table allows only selective data to update?

  6. It all depends where in the process chain the commit was executed, whether there were any DML statements after it, and whether any rollback (to savepoint) was executed...

  7. And here are some thoughts offered up on LinkedIn:

    * Is the commit statement outside the procedure? If so, it may be out of scope. Is there a rollback somewhere in the procedure or function?

    * The procedure containing the COMMIT might have been executed autonomously. So the user's changes outside that procedure are still in process and not yet committed.

  8. And now my answer: definitely, in my mind, the COMMIT statement was executed within an autonomous transaction subprogram.

    If you include

    PRAGMA AUTONOMOUS_TRANSACTION;

    in the declaration section of your procedure or function, then a COMMIT in that subprogram will commit only those changes made in the scope of that subprogram.

    Other outstanding changes in my session will NOT be committed.
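
    Here's a minimal sketch of that behavior (the table and procedure names below are made up, purely for illustration):

    CREATE OR REPLACE PROCEDURE log_event (p_msg IN VARCHAR2)
    IS
       PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
       -- This insert happens in its own, independent transaction.
       INSERT INTO app_log (msg) VALUES (p_msg);

       -- Commits ONLY the insert above; the caller's transaction is untouched.
       COMMIT;
    END;
    /

    BEGIN
       -- Change made in the main (session) transaction.
       UPDATE employees SET salary = salary * 1.1;

       -- A COMMIT executes successfully inside log_event...
       log_event ('Salaries raised');

       -- ...yet the UPDATE above is still uncommitted in this session.
    END;
    /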

    Now as to your comments:

    @stanley, I'd love to hear more explanation about some of your items, as they are outside my area of expertise. I am not sure if a distributed xaction applies here, since I reference the "user's session".

    @john, certainly any DML statements executed after the commit would be uncommitted. A ROLLBACK TO (savepoint) before the commit would remove outstanding changes. After the commit? Well, the teaser has to do with the state of the session right after the commit (implied: before other actions take place).
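
    A quick sketch of those two points (again, a hypothetical table, just to illustrate):

    BEGIN
       SAVEPOINT before_raise;
       UPDATE employees SET salary = salary * 1.1;

       -- Undoes the update; those outstanding changes are gone before the commit.
       ROLLBACK TO before_raise;

       COMMIT;

       -- Any DML executed after the commit is, once again, uncommitted.
       UPDATE employees SET salary = salary * 1.2;
    END;
    /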

    Thanks for participating!

