
New PL/SQL book: Oracle PL/SQL Performance Tuning Tips & Techniques

I recently received a copy of Michael Rosenblum's and Dr. Paul Dorsey's latest book: Oracle PL/SQL Performance Tuning Tips & Techniques. Very impressive!

It's so different from mine: only 300 pages compared to my monster tome of 1000+ pages. Ah, so much easier to hold.

But way more importantly, it is packed full of performance advice, based on the deep, long experience of two Oracle technologists who have been out in the trenches helping customers put together successful applications that fully leverage Oracle Database and all its core technologies.

There are an awful lot of books on PL/SQL in the market; many of them (inevitably) cover the same material, albeit in different ways.

I found this book to be a very refreshing addition to the mix. It takes a holistic approach, offering glimpses into aspects of Oracle Database architecture and tuning/analysis tools with which most PL/SQL developers are not terribly familiar.

It uncovers some delightful nuggets, such as improving the deterministic caching of user-defined function calls in SQL by placing that function call in a scalar subquery (page 191).
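
To give a sense of that technique (a minimal sketch of my own, with hypothetical table and function names, not code taken from the book): calling a function directly in the SELECT list can execute it once per row, while wrapping the call in a scalar subquery lets Oracle apply scalar subquery caching and skip repeated executions for the same input.

    -- Hypothetical names, for illustration only.
    -- Direct call: expensive_lookup may be executed for every row.
    SELECT employee_id,
           expensive_lookup (department_id) AS dept_info
      FROM employees;

    -- Wrapped in a scalar subquery: Oracle can cache the result
    -- for repeated department_id values and call the function far less often.
    SELECT employee_id,
           (SELECT expensive_lookup (department_id) FROM dual) AS dept_info
      FROM employees;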

I plan to apply a number of their ideas to the PL/SQL Challenge backend; I also expect to be modifying some of my training materials to reflect their experience in some feature areas for which I am mostly an "academic" presenter. That makes it a book definitely worth having on my bookshelf, and one I can certainly recommend to others!

Of course, no book is perfect. I feel that Oracle PL/SQL Performance Tuning Tips & Techniques could benefit from a clearer statement of use cases for a number of features, such as FORALL and the Function Result Cache. Certainly, many readers will be experienced developers and so perhaps don't need that framing, but be optimistic, fellows! Expect that many readers will be relatively inexperienced developers trying to figure out how to improve their code's performance.
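
To show the kind of use-case framing I have in mind (my own sketch, assuming the standard HR employees table, not an example from the book): FORALL is the right tool when the values you need are already sitting in a collection and you want to run the same DML statement for each element, replacing a row-by-row loop with a single bulk-bound statement.

    DECLARE
       TYPE ids_t IS TABLE OF employees.employee_id%TYPE;

       -- Hypothetical IDs; in practice these would already be in memory.
       l_ids   ids_t := ids_t (100, 101, 102);
    BEGIN
       -- One context switch to the SQL engine for all the elements,
       -- instead of one per row.
       FORALL indx IN 1 .. l_ids.COUNT
          UPDATE employees
             SET salary = salary * 1.1
           WHERE employee_id = l_ids (indx);
    END;
    /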

Bottom Line:

If you write PL/SQL or are responsible for tuning the PL/SQL code written by someone else, this book will give you a broader, deeper set of tools with which to achieve PL/SQL success.

Comments

  1. I read your comment on the book, but I guess this book is for advanced performance tuning. Do you have any recommendation for a newbie like me? :) Maybe this one is for the next level.

  2. Yes, you are right - this is for someone with solid experience in Oracle. Well, the good news about PL/SQL is that it really is a pretty easy language to learn. You could check out PL/SQL for Dummies or PL/SQL 101 - or just go straight for my big fat Oracle PL/SQL Programming book - it's not designed for beginners, but I think it is a pretty accessible and comprehensive text.

  3. Thanks a lot, Steven, will check that out.
