
Change in ALL_ARGUMENTS as of 18c: no more composite "explosion"

The Oracle catalog contains hundreds of views (generally referred to as "data dictionary views") that provide information about the objects stored in the database (tables, views, PL/SQL program units, etc.). Several of these views are extremely helpful to PL/SQL developers in analyzing and managing their code.

Here are a few examples:
  • ALL_OBJECTS - information about all database objects that can be accessed by the current user.
  • ALL_ARGUMENTS - information about every argument of every packaged subprogram and schema-level program unit for which the current user has EXECUTE authority.
  • ALL_IDENTIFIERS - information about identifiers in program units, gathered by the Oracle Database 11g PL/Scope feature.
In this blog post, I explore a change in behavior for ALL_ARGUMENTS (and its USER* and DBA* variants) as of Oracle Database 18c.

ALL_ARGUMENTS lists the arguments of the procedures and functions that are accessible to the current user. USER_ARGUMENTS, as you might know or guess, offers information about those arguments defined in procedures and functions in the currently connected schema.

This view gives developers access to all sorts of interesting information, allowing them to create queries to perform checks on their code.

You can, for example, determine whether any arguments are of an "undesirable" type, such as CHAR or LONG.
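A query along these lines does the trick (a sketch; the exact DATA_TYPE values you want to flag may need adjusting for your version and coding standards):

-- Find arguments declared with types we'd rather not see
SELECT object_name, package_name, argument_name, data_type
  FROM user_arguments
 WHERE data_type IN ('CHAR', 'LONG');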

You can also determine if a subprogram is a function or a procedure (though this is now also possible through PL/Scope and ALL_IDENTIFIERS). Here's an Oracle LiveSQL script showing this usage.
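One way to make that determination from the arguments view alone (a sketch, limited here to schema-level subprograms): a function always has a row with POSITION = 0 for its return value, while a procedure does not.

-- Classify schema-level subprograms as functions or procedures
SELECT object_name,
       CASE
          WHEN MAX (CASE WHEN position = 0 THEN 1 ELSE 0 END) = 1
          THEN 'FUNCTION'
          ELSE 'PROCEDURE'
       END subprogram_type
  FROM user_arguments
 WHERE package_name IS NULL
 GROUP BY object_name;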

As you likely know, an argument could be a scalar (number, date, string, etc.) or a composite (consisting of multiple "pieces", such as a record or object type). Up until Oracle Database 12c Release 2, ALL_ARGUMENTS recursively expanded composites. In other words, that view contained one row for the argument itself and then one row for each attribute of an object type or field of a record. You can see this below.

I create a table, then a function that accepts a record as an IN parameter.

CREATE TABLE plch_employees
(
   employee_id     INTEGER
 , last_name       VARCHAR2 (100)
 , first_name      VARCHAR2 (100)
 , date_of_birth   DATE
 , hire_date       DATE
 , salary          NUMBER
)
/

CREATE OR REPLACE FUNCTION plch_get_info (
   employee_in    IN plch_employees%ROWTYPE
 , info_type_in   IN PLS_INTEGER
 , for_date_in    IN DATE)
   RETURN VARCHAR2
IS
BEGIN
   /* Stub body - the argument metadata is what matters here */
   RETURN NULL;
END;
/

In Oracle Database 12c Release 2 and earlier, this query returns a count of 10: one row for the return value, one for each of the three parameters, and one for each of the record's six fields.

SELECT COUNT (*) arg_count
  FROM user_arguments
 WHERE object_name = 'PLCH_GET_INFO'
/

Run this same code in 18c and above, and the count of rows is just 4. The following script shows a bit more detail.
SELECT object_name, argument_name, position
  FROM user_arguments
 WHERE object_name = 'PLCH_GET_INFO'
/

OBJECT_NAME          ARGUMENT_NAME          POSITION
-------------------- -------------------- ----------
PLCH_GET_INFO                                      0
PLCH_GET_INFO        EMPLOYEE_IN                   1
PLCH_GET_INFO        INFO_TYPE_IN                  2
PLCH_GET_INFO        FOR_DATE_IN                   3

4 rows selected.
In 12c Release 2 and earlier, the same query returns ten rows; the EMPLOYEE_IN row is followed by one row for each of the record's fields:

SELECT object_name, argument_name, position
  FROM user_arguments
 WHERE object_name = 'PLCH_GET_INFO'
/

OBJECT_NAME          ARGUMENT_NAME          POSITION
-------------------- -------------------- ----------
PLCH_GET_INFO                                      0
PLCH_GET_INFO        EMPLOYEE_IN                   1
PLCH_GET_INFO        EMPLOYEE_ID                   1
PLCH_GET_INFO        LAST_NAME                     2
PLCH_GET_INFO        FIRST_NAME                    3
PLCH_GET_INFO        DATE_OF_BIRTH                 4
PLCH_GET_INFO        HIRE_DATE                     5
PLCH_GET_INFO        SALARY                        6
PLCH_GET_INFO        INFO_TYPE_IN                  2
PLCH_GET_INFO        FOR_DATE_IN                   3

10 rows selected.

If you relied on that expanded data in ALL_ARGUMENTS in any of your QA scripts, you will now need to join ALL_ARGUMENTS with one or more of the following views: ALL_PLSQL_TYPES, ALL_PLSQL_TYPE_ATTRS, and ALL_PLSQL_COLL_TYPES.

I'll publish a post soon that explores how to do just that.
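In the meantime, to give a feel for the shape of such a join, here is a rough sketch only (the exact join keys depend on how the type is reported for your argument; here I assume the implicit record type shows up in TYPE_NAME/TYPE_SUBNAME) that recovers the fields of a record-typed argument:

-- Sketch: recover the fields of a record-typed argument (18c and above)
SELECT arg.argument_name, attr.attr_name, attr.attr_type_name
  FROM user_arguments arg
       JOIN user_plsql_type_attrs attr
          ON  attr.package_name = arg.type_name
         AND attr.type_name = arg.type_subname
 WHERE arg.object_name = 'PLCH_GET_INFO';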


  1. Hi Steven,

    Do you happen to know any detail about why that behavior has been changed?
    I am aware of the following bug that is related to the performance of DataPump Import operations and, generally, the amount of space occupied in the SYSTEM tablespace:
    It seems to be quite a nice improvement, however, I have not seen any tests showing the performance effects of that change.
    I guess JDBC should also be affected in addition to any other operations utilizing %_ARGUMENTS views extensively (don't recall anything right now apart from JDBC metadata queries).


    1. Mikhail, I believe you are right on target. A key motivation for the change was to reduce the amount of space needed to store argument information.

