PL/SQL Optimization Levels and Native Code Generation

Charles Wetherell, Consulting Member of the PL/SQL development team, was kind enough to offer these insights regarding PL/SQL optimization and native code generation.

A PL/SQL programmer asked why PL/SQL native code generation was turned off when the PL/SQL optimization level was set to 1.

There are four PL/SQL optimization levels:

 0. Esoteric; retained only for compatibility issues with release 9 and earlier that have long since passed
 1. Basic code generation with debugging data created
 2. Global optimization
 3. Automatic inlining of local procedures

Each level builds on the level before. Debugging data is not created above level 1.
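For reference, the optimization level is selected with the standard PLSQL_OPTIMIZE_LEVEL compilation parameter, either session-wide or per unit (the procedure name below is illustrative):

```sql
-- Set the level for all compilations in this session.
ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 2;

-- Or recompile a single unit at a specific level while keeping
-- its other compiler settings (my_proc is a placeholder name).
ALTER PROCEDURE my_proc COMPILE PLSQL_OPTIMIZE_LEVEL = 1 REUSE SETTINGS;
```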

Generally, native code generation is independent of optimization level. Native code generation is turned off at levels 1 (and 0) because it interferes with debugging. In other words, PL/SQL code compiled at optimization levels 0 and 1 is always interpreted when executed.
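Native versus interpreted execution is chosen with the PLSQL_CODE_TYPE parameter, independently of the optimization level (a minimal sketch):

```sql
-- Choose native or interpreted code for subsequent compilations.
ALTER SESSION SET PLSQL_CODE_TYPE = 'NATIVE';   -- or 'INTERPRETED'

-- Per the note above, units compiled at optimization level 0 or 1
-- execute interpreted regardless, so that debugging works.
```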

You should never use level 0. That is a blanket prescription. Certainly no new code should ever require it.

Level 1 performs basic code generation with many simple optimizations. Level 2 adds global analysis that considers the possible flow of control through each subprogram when optimizing; this extra analysis markedly improves the quality of the generated code. The default optimization level is 2, and you should generally not use level 1 unless you specifically want to debug or do something else that requires it. Optimization level 2 is likely to speed up PL/SQL code by a factor of 2 to 3. But see the comment on SQL below.

Probably, code will run noticeably faster if you always use level 3; the inlining is controlled in a way that almost always generates a performance improvement. I know of no realistic cases where code ran significantly slower or had other problems because of the use of level 3.
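At level 3 the compiler inlines local calls automatically; at level 2 you can still request inlining of a specific call with PRAGMA INLINE. A sketch (demo and calc_total are invented names):

```sql
CREATE OR REPLACE PROCEDURE demo IS
   l_result NUMBER;
   -- A small local function, a typical inlining candidate.
   FUNCTION calc_total (a NUMBER, b NUMBER) RETURN NUMBER IS
   BEGIN
      RETURN a + b;
   END calc_total;
BEGIN
   -- At level 2 this requests inlining of the call that follows;
   -- at level 3 inlining is automatic, and 'NO' would suppress it.
   PRAGMA INLINE (calc_total, 'YES');
   l_result := calc_total (1, 2);
END demo;
/
```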

Native compilation will almost always improve code performance significantly as well. This means that the most interesting combinations are:

Native and opt = 2
Native and opt = 3
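Either combination can be applied per unit in one recompilation, and the settings actually in effect can be checked in the data dictionary (my_pkg is a placeholder name):

```sql
-- Recompile one unit natively at optimization level 3.
ALTER PACKAGE my_pkg COMPILE
   PLSQL_CODE_TYPE = NATIVE
   PLSQL_OPTIMIZE_LEVEL = 3;

-- Verify what each unit in your schema was compiled with.
SELECT name, type, plsql_code_type, plsql_optimize_level
  FROM user_plsql_object_settings;
```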

So far as I know, there are no significant cases where higher optimization levels cause the PL/SQL compiler to slow down noticeably or to consume too many resources during compilation. In other words, compilation expense should not be a factor in deciding what optimization level to use.

Most PL/SQL applications do NOT spend most of their execution time in PL/SQL. Much of their time is spent executing the SQL triggered by embedded SQL statements in the PL/SQL program. A likely time split is something like 75% of execution time in SQL and 25% in PL/SQL. The PL/SQL compiler settings have essentially no effect on the performance of SQL. Of course, your particular application might have a substantially different time split, but spending more than 50% of application time in PL/SQL would be regarded as unusual.
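To find your own application's split before tuning compiler settings, the hierarchical profiler DBMS_HPROF separates time spent in PL/SQL from time spent in SQL. A sketch, assuming a directory object (PLSHPROF_DIR here is illustrative) and EXECUTE privilege on the package:

```sql
BEGIN
   DBMS_HPROF.start_profiling (location => 'PLSHPROF_DIR',
                               filename => 'app_profile.trc');
END;
/

-- ... run the workload under test ...

BEGIN
   DBMS_HPROF.stop_profiling;
END;
/
```

The resulting trace can then be analyzed with DBMS_HPROF.analyze or the plshprof command-line utility to see how execution time divides between PL/SQL and SQL.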

The documentation describes the PL/SQL compilation controls.

I wrote a blog post about inlining.

When the optimizing PL/SQL code generator was first introduced, I wrote a paper about the definition of PL/SQL and the optimizations that are allowed in PL/SQL programs. You may find that your understanding of PL/SQL is deepened and your programming becomes more sophisticated after you read this.
