
Table Functions, Part 1: Introduction and Exploration

Please do feel encouraged to read this and my other posts on table functions, but you will learn much more about table functions by taking my Get Started with PL/SQL Table Functions class at the Oracle Dev Gym. Videos, tutorials and quizzes - then print a certificate when you are done!


Table functions - functions that can be called in the FROM clause of a query from inside the TABLE operator - are fascinating and incredibly helpful constructs.

So I've decided to write a series of blog posts on them: how to build them, how to use them, issues you might run into.

Of course, I am not the first to do so. I encourage you to check out the documentation, as well as excellent posts from Adrian Billington (search for "table functions") and Tim Hall. Adrian and Tim mostly focus on pipelined table functions, a specialized variant of table functions designed to improve performance and reduce PGA consumption. I will take a look at pipelined table functions in the latter part of this series.

Here's my plan for the series:

1. I will start (in this post) with some very simple examples and exploration of a few use cases.

2. Explore table functions that return more than one value per row (via object types).

3. Take a look at how the PL/SQL Challenge uses table functions to avoid code redundancy and minimize the number of interactive reports needed to provide rankings.

4. Streaming table functions: these are typically used in data warehousing environments, where one or more transformations of data are "streamed" within a single SQL statement.

5. Pipelined table functions: how they differ from non-pipelined, what they look like, why you would use them.

6. Optimizing execution of table functions: it helps to provide the cost-based optimizer with some additional information so that it can come up with the best plan.

The Basic Idea

I assume you are familiar with the SQL SELECT statement:

SELECT column_list FROM table_or_view1 [, table_or_view2...]

Several versions back, Oracle Database added the TABLE operator, which transforms a collection's contents into a relational dataset that can be consumed by a SELECT. The syntax looks like:

SELECT column_list FROM TABLE (my_collection)

and you can also call a function from within the TABLE operator that returns a collection, as in:

SELECT column_list FROM TABLE (my_function (...))

Which means you can emulate a "parameterized view": the values you pass in the function's parameter list determine the dataset returned by the TABLE operator.

Now, a "true" parameterized view might look something like this:

CREATE VIEW my_view (department_id_in IN INTEGER)
AS 
   SELECT column_list FROM TABLE (my_function (department_id_in))

But that is not supported in Oracle Database. Instead, a "parameterized view" is, in effect, the table function itself. Your report (or wherever else you need the "parameterized view") would look more or less like:

SELECT * FROM TABLE (my_function (param1, param2, ...))

This way, the dataset returned by the table function is determined by values passed to the function at run-time, rather than at the time a view built around the table function was created.
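To make the pattern concrete, here is a minimal sketch of such a "parameterized view," assuming the standard HR employees table; the names names_t and names_in_department are illustrative only:

```sql
-- Illustrative only: a table function acting as a "parameterized view"
CREATE OR REPLACE TYPE names_t IS TABLE OF VARCHAR2 (100);
/

CREATE OR REPLACE FUNCTION names_in_department (department_id_in IN INTEGER)
   RETURN names_t
IS
   l_names   names_t;
BEGIN
   SELECT last_name
     BULK COLLECT INTO l_names
     FROM employees
    WHERE department_id = department_id_in;

   RETURN l_names;
END;
/

-- The department to report on is decided at run-time:
SELECT COLUMN_VALUE last_name
  FROM TABLE (names_in_department (50))
/
```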

In addition, in the FROM clause, you can also "blend" together table function calls, tables and views, as in:

SELECT column_list 
  FROM TABLE (my_collection) t1,
       employees e,
       TABLE (another_collection) t2

because at that point, the dataset returned by the table function is treated just like the dataset from a table or view. You can use ORDER BY, GROUP BY, etc. on the data returned from the function.
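For instance, sticking with the my_family function defined later in this post (and the HR employees table), a hypothetical join and sort might look like:

```sql
-- Hypothetical example: join a table function's rows to a real table
SELECT t.COLUMN_VALUE family_member
  FROM TABLE (my_family ()) t, employees e
 WHERE t.COLUMN_VALUE = e.last_name
 ORDER BY t.COLUMN_VALUE
/
```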

Prior to Oracle Database 12c Release 1, only nested tables and varrays could be consumed by the TABLE operator, and their types had to be defined at the schema level (CREATE TYPE) or in a package specification.

In 12.1 and higher, you can also use TABLE with integer-indexed associative arrays (also known as "index-by tables" because of their INDEX BY clauses), one of my favorite 12.1 enhancements for PL/SQL.
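Here is a small sketch of that 12.1 enhancement; the package and type names are illustrative only:

```sql
CREATE OR REPLACE PACKAGE aa_pkg
IS
   TYPE numbers_aat IS TABLE OF NUMBER
      INDEX BY PLS_INTEGER;
END;
/

DECLARE
   l_numbers   aa_pkg.numbers_aat;
   l_total     NUMBER;
BEGIN
   l_numbers (1) := 100;
   l_numbers (2) := 200;

   /* Pre-12.1, this SELECT fails; in 12.1 and higher, the
      associative array is consumed directly by TABLE. */
   SELECT SUM (COLUMN_VALUE)
     INTO l_total
     FROM TABLE (l_numbers);

   DBMS_OUTPUT.put_line (l_total);
END;
/
```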

By the way, I expect it is rather clear by now that you need to know about PL/SQL collections in order to use table functions. You don't necessarily need to know all the details, but here are some links to help you along:

Collections documentation

PL/SQL Channel videos on collections (5+ hours' worth!)

Use Cases

I am hoping that your eyes lit up at even the most basic presentation of this feature above, and that your mind is spinning with ideas of how to use table functions. To help you along, though, I offer the following list, all of which will be explored as we move through this series.

Merge session-specific data with data from tables

You've got data, and lots of it sitting in tables. But in your session (and not in any tables), you have some data - and you need to "merge" these two sources together in an SQL statement. In other words, you need the set-oriented power of SQL to get some answers.

With the TABLE operator, you can accomplish precisely that.

Programmatically construct a dataset to be passed as rows and columns to the host environment

Your webpage needs to display some data in a nice neat report. That data is, however, far from neat. In fact, you need to execute procedural code to construct the dataset. Sure, you could construct the data, insert into a table, and then SELECT from the table.

But with a table function, you can deliver that data immediately to the webpage, without any need for non-query DML.

Create (what is in effect) a parameterized view

One of my favorites, and arises (for me) directly from my work on the PL/SQL Challenge. We have lots of different ranking reports, based on different materialized views (but all very similar in their columns and the way the data is computed). Used to be, we created something like 25 different interactive reports in Application Express.

Then, when facing the need to enhance each and every one of those, we stepped back and looked for ways to avoid this repetitive mess. The answer lay in a table function. Since you call a function in the FROM clause, you can pass parameters to the function and therefore to the query itself. That flexibility made it possible to replace those 25 different reports with just 1 report, built on that "parameterized view."

Improve performance of parallelized queries (pipelined table functions)

Many data warehouse applications rely on Parallel Query to greatly improve performance of massive ETL operations. But if you execute a table function in the FROM clause, that query will serialize (blocked by the call to the function) - unless you define that function as a pipelined function and enable it for parallel execution.

Reduce consumption of Process Global Area (pipelined table functions)

Collections (which are constructed and returned by "normal" table functions) can consume an awful lot of PGA (Process Global Area). But if you define that table function as pipelined, PGA consumption becomes a non-issue.
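A taste of what's to come later in this series - a minimal, illustrative pipelined function (the type and function names are mine, not from any real schema):

```sql
CREATE OR REPLACE TYPE pipe_names_t IS TABLE OF VARCHAR2 (100);
/

CREATE OR REPLACE FUNCTION lots_of_names
   RETURN pipe_names_t
   PIPELINED
IS
BEGIN
   /* Each row is piped back to the consuming query immediately;
      the full collection is never materialized in PGA. */
   FOR indx IN 1 .. 1000000
   LOOP
      PIPE ROW ('Name #' || indx);
   END LOOP;

   RETURN;
END;
/

SELECT COLUMN_VALUE
  FROM TABLE (lots_of_names ())
 WHERE ROWNUM <= 5
/
```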

Do you know of others? Tell me in a comment on the post and I will add them to this post and give you credit (of course)!

Simple Examples 

I will close out this initial post of the series with some examples of the simplest sort of table functions: those that return a collection of scalar values - a list of strings, dates, numbers, etc.

Since a table function returns a collection, the type of collection must first be defined. Prior to 12.1, that type must be defined as a schema-level type, such as:

CREATE OR REPLACE TYPE list_of_names_t
   IS TABLE OF VARCHAR2 (100);
/

And that's all I need to have some fun. I will now define a function that returns the collection:

CREATE OR REPLACE FUNCTION my_family
   RETURN list_of_names_t
IS
   happyfamily   list_of_names_t
                    := list_of_names_t ('Veva',
                                        'Chris',
                                        'Lauren',
                                        'Loey',
                                        'Eli',
                                        'Steven');
BEGIN
   RETURN happyfamily;
END;
/

And then....I can SELECT FROM that function!

SELECT COLUMN_VALUE family_member
  FROM TABLE (my_family () )
/

FAMILY_MEMBER
-------------------
Veva
Chris
Lauren
Loey
Eli
Steven

Note that when you return a collection of scalars, Oracle automatically uses "COLUMN_VALUE" as the name of the column. You can change it to whatever you'd like with a column alias.

Starting with 12.1, you can also define the collection type returned by a table function in a package specification, and then reference a collection of that type inside a TABLE operator (one exception: pipelined table functions could already use package-declared collection types before 12.1):

CREATE OR REPLACE PACKAGE plch_pkg
IS
   TYPE list_of_names_t IS TABLE OF VARCHAR2 (100);
END;
/

CREATE OR REPLACE FUNCTION my_family
   RETURN plch_pkg.list_of_names_t
IS
   /* Only works on 12.1 and higher */
   happyfamily plch_pkg.list_of_names_t
                    := plch_pkg.list_of_names_t ('Veva',
                                        'Chris',
                                        'Lauren',
                                        'Loey',
                                        'Eli',
                                        'Steven');
BEGIN
   RETURN happyfamily;
END;
/

DECLARE
   l   INTEGER;
   t   plch_pkg.list_of_names_t := my_family ();
BEGIN
   SELECT COUNT (*) INTO l FROM TABLE (t);

   DBMS_OUTPUT.put_line (l);
END;
/

Note, however, that even in 12.1, you cannot call a function that returns a package-declared collection type directly inside the TABLE operator. You must invoke it in PL/SQL, assign the result to a variable, and then reference the variable inside TABLE. [thanks to Iudith Mentzel for reminding me of this point - see Comments]

And if you try this prior to 12.1, you will see this error:

ORA-21700: object does not exist or is marked for delete

So upgrade to Oracle Database 12c, already, willya? :-)

As I'd mentioned earlier, once you've stuck that table function inside TABLE inside FROM, well, it's a query like any other. You can create a view over it, you can join with other tables, use it with set operators, etc.:

CREATE VIEW family_members_v
AS
   SELECT COLUMN_VALUE family_member
     FROM TABLE (my_family () )
/

View created

DECLARE
   happyfamily list_of_names_t := 
      list_of_names_t ('Larry', 'Mark', 'Safra');
BEGIN
   FOR rec IN (  SELECT COLUMN_VALUE a_name
                   FROM TABLE (happyfamily)
                 UNION
                 SELECT last_name FROM employees
                  WHERE department_id = 90)
   LOOP
      DBMS_OUTPUT.put_line (rec.a_name);
   END LOOP;
END;
/

De Haan
King
Kochhar
Larry
Mark
Safra

DECLARE
   happyfamily list_of_names_t := 
      list_of_names_t ('Kingston', 'Bellman');
BEGIN
   FOR rec IN (  SELECT DISTINCT e.last_name
                   FROM TABLE (happyfamily) hf, employees e
                  WHERE hf.COLUMN_VALUE like e.last_name || '%')
   LOOP
      DBMS_OUTPUT.put_line (rec.last_name);
   END LOOP;
END;
/

King
Bell

I am not, by the way, claiming that the last example makes any sense. Just showing that you can do it.

And there you have it: a quick introduction, a description of some use cases, and a set of simple examples.

Next in series: Explore table functions that return more than one value per row (via object types).

Questions? Ask 'em below!
Complaints? State 'em below!
Suggestions for improvement? Let 'em rip!

And if you'd like to dive right in to more interesting examples, download my demo.zip and check out the files starting with "tabfunc".

Comments

  1. Hello Steven,

    Regarding the enhancement in 12c that allows a collection (nested table or associative array) defined in a package to be used as argument for the TABLE
    operator:
    This immediately reminded me of one of the quizzes of the Playoff of 2014/Q1,
    how else :) ?

    As by that quiz, even in 12c, we cannot use directly a FUNCTION that returns such
    a collection type inside the TABLE() operator.
    Instead, we should create a local collection variable and assign to it the result of the function, and then use that local variable as argument for the TABLE() operator.

    As by this result, the SELECT ... FROM TABLE(myfamily() )
    is supposed NOT to work in 12c, when myfamily() returns a package level collection
    type.

    Or, maybe I am missing something ?

    Unfortunately, I still don't have a 12c database to check :( :( :(

    Thanks a lot & Best Regards,
    Iudith Mentzel

    ReplyDelete
  2. Many thanks, Iudith. Yes, you are correct and I have applied that correction to the post. Thanks for your close reading and encyclopedic memory of the PL/SQL Challenge quizzes (plsqlchallenge.com).

    ReplyDelete
  3. You can get around the issue of "as of 12.1, you cannot call the table function directly inside the TABLE operator", and also use a table function with a package-declared type in 11g by using the PIPELINED functionality.

    For example, if I build another function that calls your "Only works in 12c" version of my_family like:

    CREATE OR REPLACE FUNCTION my_family_pl
    RETURN plch_pkg.list_of_names_t PIPELINED
    IS
    l_happyfamily plch_pkg.list_of_names_t := my_family();
    BEGIN
    FOR i in 1 .. l_happyfamily.COUNT
    LOOP
    PIPE ROW(l_happyfamily(i));
    END LOOP;
    -- Return to Indicate Pipeline complete
    RETURN;
    END;
    /

    I can then use it in SQL. Output below is from a 11.2.0.2 DB:

    SQL> select *
    from table(my_family_pl)

    COLUMN_VALUE
    --------------------------------------------------------------------------------
    Veva
    Chris
    Lauren
    Loey
    Eli
    Steven

    6 rows selected.

    ReplyDelete
    Replies
    1. Craig, formally you got around this restriction but at what cost? You ended up with a function that is declared as pipelined but isn't pipelined by nature.

      Why do we use pipelined functions? We do so to increase function performance. We construct collection element, pass it (via pipe row) back to SQL where

      client fetches and processes it while function is using that time constructing next row thus increasing function perfomance. Your function constructs

      collection first then reads element by element and pipes it back. Therefore, it will wait till collection is constructed in full and pipe row gives no

      advantage.
      I can't think of a meaningful example with associative array, so I'll use the following. Select distinct jobs from EMP:

      CREATE OR REPLACE PACKAGE plch_pkg
      IS
      TYPE list_of_names_t IS TABLE OF VARCHAR2 (100);
      TYPE list_of_names_ta IS TABLE OF VARCHAR2(100) INDEX BY VARCHAR2(100);
      END;
      /
      CREATE OR REPLACE
      FUNCTION distinct_jobs
      RETURN plch_pkg.list_of_names_t
      PIPELINED
      IS
      v_assocarray plch_pkg.list_of_names_ta;
      BEGIN
      FOR v_rec in (select job from emp) loop
      IF NOT v_assocarray.exists(v_rec.job)
      THEN
      -- complex logic could be here to justify PIPELINED
      v_assocarray(v_rec.job) := v_rec.job;
      PIPE ROW(v_assocarray(v_rec.job));
      END IF;
      END LOOP;
      END;
      /


      Solomon Yakobson.

      Delete
  4. Thanks, Craig. Great point! Yes, it is true, that even before 12.1, you could use package-based collection types - but only with pipelined table functions. A pipelined table function is a special type of table function and generally people blur the distinction between the two. And there are tradeoffs - one example: a "normal" TF can be called in FROM clause as well as in PL/SQL block. A pipelined TF can ONLY be called inside FROM clause.

    ReplyDelete
  5. Hello All,
    Just for "closing the circle" for readers that do not actively play the PL/SQL Challenge ( Steven, how come that such beings still exist ? ):
    When creating a pipelined function that returns a package level collection type,
    then, behind the scenes, Oracle creates schema level types with default names
    based on a pattern that includes the object_id of the package, for supporting the pipelined function.
    Adrian Billington in one of his excellent posts on the site indicated by Steven
    above, does explain both this behavior and also why he always does prefer to create
    his own schema level collection types for supporting pipelined functions.

    However, regarding the 12c enhancement, I still don't understand completely
    why a function call that returns a package level collection is NOT allowed as argument,
    for TABLE(), while a local variable of the same data type is allowed,
    which is exactly the opposite of the case of a pipelined function.

    I guess that it still has something to do with the general statement that still appears
    in the 12c PL/SQL Documentation:

    "In SQL contexts, you cannot use a function whose return type was declared in a package specification."

    And, if already at this point, I should mentioned the pl/sql challenge quiz of this week,
    which also points out some "weird behaviors" when "crossing the border" between
    PL/SQL and SQL constructs .... looks like Oracle still did not decided finally
    "to close these gaps", and make everything fully compatible between the two contexts.

    Thanks a lot & Best Regards,
    Iudith Mentzel

    ReplyDelete
  6. Hi Steven, they're incredibly useful tools. I used pipelined table functions to dynamically (or rather, programmatically) mutate my reporting data model to group by any given time frame and allow time period comparison across any pair of user-specified date ranges in OBIEE - something that is extremely difficult against a regular data warehouse star schema.

    The map-reduce pattern is also a really interesting use case, but I haven't looked into it any further than a basic keyword analysis of message transcripts.

    Looking forward to reading more from you on this.

    ReplyDelete
    Replies
    1. @Phil Wilkinson That's an interesting use-case. I would be interested to read more about this and how you went about implementing in OBIEE.

      Delete
  7. Hi Steven,
    One of the issues with TABLE operator is inability to get collection element index value. Therefore we can't get guaranteed TABLE operator results in collection element order. I placed enhancement request to add COLUMN_INDEX pseudo column as COLUMN_VALUE counterpart. This was when TABLE operator was called THE and Oracle didn't have XML support. Since then Oracle added XML support which (yes, it is XML standard) supports FOR ORDINALITY to get "index" value however no index in TABLE operator.

    ReplyDelete
  8. Hello All,

    @Solomon, if I remember correctly, then I think that I have once read a thread
    on Asktom in which Tom Kyte explained that, most probably, the results of the TABLE operator will effectively always follow the collection element order, because there is too much internal functionality implemented inside Oracle
    that would become broken if this rule was violated.

    For pipelined functions this seems pretty obvious.

    Though, of course, since this feature is not documented, we cannot rely on it 100%.

    Relational purists could argue that, once you turn a collection into a relational table, there is no ordering (and no collection index) any more, and, instead,
    you can order the results using an ORDER BY in any way you wish.

    If original elements ordering is really important, then with some extra-work,
    the collection can be created as a collection of objects, with one object attribute containing the original index value.

    Best Regards,
    Iudith

    ReplyDelete
  9. Hi Steven,
    Can you please explain what you mean by "that as of 12.1, you cannot call the table function directly inside the TABLE operator. You must invoke it in PL/SQL, assign result to a variable, and then reference the variable inside TABLE" ?
    As per my understanding we are doing the same thing by using the query select column_value from table(my_function()) in the very first example. Please correct me where my mistake is in understanding the concept ?

    Regards,
    Swati

    ReplyDelete
  10. I say in this column:

    Prior to Oracle Database 12c Release 1, only nested tables and varrays could be consumed by the TABLE operator, and their types had to be defined at the schema level (CREATE TYPE) or in a package specification.

    So that example demonstrates syntax you can use with a nested table or varray, but not (yet) with an associative array.

    ReplyDelete
    Replies
    1. That is not true.
      "Starting with 12.1, you can also define a collection type returned by a table function in a package specification, and then reference that collection inside a TABLE operator:"

      It's possible at least in Oracle 11.2

      Delete
    2. Alex, yes, as discussed in other comments (and now clarified with a note on that sentence), this is possible prior to 12.1, but only for pipelined table functions. If you have an example demonstrating otherwise, please do share. And thanks for writing!

      Delete
  11. The following code run on Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

    CREATE OR REPLACE package TEST_pkg as
    type t_rec is record (
    a varchar2(10),
    b varchar2(10)
    );

    type t_tbl is table of t_rec;
    end;

    create or replace function foo return TEST_pkg.t_tbl
    pipelined
    as
    l_out TEST_pkg.t_rec;
    begin

    for i in (select 2+level a, 3+level b from dual connect by level <5) loop
    l_out.a := i.a;
    l_out.b := i.b;
    pipe row(l_out);
    end loop;
    end;
    /

    select * from table(foo);

    ReplyDelete
  12. pipeline also works for xmltype

    create type xml_tbl as table of xmltype;

    create or replace function foo return xml_tbl
    pipelined
    as
    begin

    pipe row (
    xmltype('<employee>
    <name>
    <first>Mary</first>
    <last>Jones</last>
    </name>
    <office>415</office>
    <phone>647-504-4546</phone>
    </employee>') );

    pipe row (
    xmltype('<employee>
    <name>
    <first>John</first>
    <last>Doe</last>
    </name>
    <office>344</office>
    </employee>') );

    pipe row (
    xmltype('<employee>
    <name>
    <first>Peter</first>
    <last>Pan</last>
    </name>
    <office>216</office>
    <phone>905-416-5004</phone>
    </employee>')
    );

    end;
    /


    SELECT X.*
    FROM table(foo),
    XMLTABLE ('$d/employee' passing column_value as "d"
    COLUMNS
    empID INTEGER PATH '@id',
    firstname VARCHAR2(20) PATH 'name/first',
    lastname VARCHAR2(25) PATH 'name/last',
    office varchar2(10) PATH 'office',
    phone VARCHAR2(30) PATH 'phone') AS X;

    ReplyDelete
  13. As you said that the pipelined functions don't use memory to hold the data,
    they hold the data in an object type.if i used them to fetch the data in plsql table in 12c,where are this data hold?

    ReplyDelete
  14. Mostafa, with pipelined functions you do not use PGA memory to construct and pass back the array. You will, however, still be using SOME memory for the individual rows piped back (usually an object type instance). But the amount will be much, MUCH smaller and likely not something to worry about - at least in terms of session memory errors.

    Hope that helps, thanks for writing!

    ReplyDelete
  15. Thanks,
    But excuse me,i still can not see the difference between two cases.
    first case:- I declared a PL-SQL table and fetched a set of emp data in it using cursor with bulk collect keyword.

    Second case:= I did the same with the declared PL-SQL table but with using pipelined functions(12c) .

    Finally i have a PL-SQL table that hold a set emp data -that passed to it with two different ways-.
    The question is what situation with memory in the two cases with the PL-SQL table
    that received the same data with two different ways.
    I wish i could explained well in my post.
    Best wishes.

    ReplyDelete
  16. Ah, I think I see your point and why we are "talking past" each other a bit.

    If you call the pipelined table function (PTF) inside a PL/SQL block, and each row that is returned by the PTF is put into a collection...you are right. In the end, you will have used roughly the same amount of PGA.

    In contrast to non-pipelined TFs, PTFs can ONLY be executed from within a query. Generally the idea with PTFs is to deliver data to a host environment that is NOT PL/SQL, for example, a web page or other user-facing element.

    The data is constructed in PL/SQL, returned via SQL, and then consumed by Java or Javascript or you name it. So the PGA consumption "step" is skipped. Does that help clear things up?

    ReplyDelete
  17. hi Steven, can we create materialized view based on these table functions..if so , can you please let me know how to do it, i tried to create the materizalied view, but its giving error .. i tried to put it in a view and then called it in the materizalied view..but still couldnt get it to work, is this a restriction or something oracle hasnt built into yet...

    ReplyDelete
  18. Minnu, first, the standard reply: "giving error" - what error, what code did you actually try to run? Did you try to isolate the error?

    Second reply: OK, I tried it myself. Didn't have a problem. Check out my script on LiveSQL:

    https://livesql.oracle.com/apex/livesql/file/content_D4JAYX30R9P6TSAU0T07M7WF1.html

    ReplyDelete
  19. Well, you said "Complaints? State 'em below!" :)

    So I'm an Oracle newbie with a number of years of MS SQL Server experience. I can't shake the feeling that this is one of many examples where the Oracle powers that be decided to make things way more complicated than necessary.

    SQL Server Ex1.
    create function myFunction(@param int)
    returns table
    as
    return(
    select --whatever the heck you want to select
    );
    select * from myFunction(100);

    Ex2.
    create function myFunction2(@param int)
    returns @myTable table (col1 int, col2 varchar(100))
    as
    begin
    -- do whatever you want
    insert into @myTable(col1, col2)
    -- insert whatever you want
    return;
    end;
    select * from myTable t join myFunction(200) f on f.col1 = t.id;

    I keep trying to think of a justification for the extra complexity in Oracle (creating types, schema vs. package types, pipeline, etc.) and I'm struggling to find any. And yes of course I realize "Oracle isn't SQL Server". I guess I'm just looking for some context from an insider. Thanks!

    ReplyDelete
    Replies
    1. Not quite sure what to tell you, Anthony. Yes, without a doubt there are elements of T-SQL syntax that are simpler than PL/SQL's. And there are many features in PL/SQL which do not appear in T-SQL (packages, for example). I imagine that members of the PL/SQL dev team (of which I am not) would argue that PL/SQL is much more and more consistently structured. Sometimes the result is a bit more "complexity" than a corresponding T-SQL operations. Sometimes the result is an elegant and simple syntax. For example, the RESULT_CACHE feature. I don't always agree or like the path taken in terms of PL/SQL syntax, but I sure respect the deep and thorough attention they give to adding new features and syntax to the language.

      Delete
    2. Thanks Steven, I appreciate the reply. I've certainly had a few "Oh nice! I wish I could have done this in T-SQL!" moments, along with a number of "Come on, really!? I miss T-SQL." moments.
      That said, your articles and videos have proven to be immensely helpful and insightful. I owe you thanks for that as well!

      Delete
  20. Hello Sir,
    Good Afternoon,

    I created a Table Function, based of a simple Select Query.
    Please Find the Script in Steps
    --Step1. I created a Dummy table and Inserted some Data
    CREATE TABLE EMP
    (EMP_NO NUMBER(4) NOT NULL PRIMARY KEY,
    E_NAME VARCHAR2(10),
    JOB VARCHAR2(9),
    MGR NUMBER(4),
    HIRE_DATE DATE,
    SAL DECIMAL(7, 2),
    COMM DECIMAL(7, 2),
    DEPT_NO NUMBER(2));


    --SELECT * FROM EMP;

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7369, 'SMITH', 'CLERK', 7902, '17-DEC-1980', 800, NULL, 20);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7499, 'ALLEN', 'SALESMAN', 7698, '20-FEB-1981', 1600, 300, 30);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7521, 'WARD', 'SALESMAN', 7698, '22-FEB-1981', 1250, 500, 30);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7566, 'JONES', 'MANAGER', 7839, '02-APR-1981', 2975, NULL, 20);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7654, 'MARTIN', 'SALESMAN', 7698, '28-SEP-1981', 1250, 1400, 30);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7698, 'BLAKE', 'MANAGER', 7839, '01-MAY-1981', 2850, NULL, 30);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7782, 'CLARK', 'MANAGER', 7839, '09-JUN-1981', 2450, NULL, 10);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7788, 'SCOTT', 'ANALYST', 7566, '09-DEC-1982', 3000, NULL, 20);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7839, 'KING', 'PRESIDENT', NULL, '17-NOV-1981', 5000, NULL, 10);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7844, 'TURNER', 'SALESMAN', 7698, '08-SEP-1981', 1500, 0, 30);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7876, 'ADAMS', 'CLERK', 7788, '12-JAN-1983', 1100, NULL, 20);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7900, 'JAMES', 'CLERK', 7698, '03-DEC-1981', 950, NULL, 30);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7902, 'FORD', 'ANALYST', 7566, '03-DEC-1981', 3000, NULL, 20);

    INSERT INTO EMP
    (EMP_NO, E_NAME, JOB, MGR, HIRE_DATE, SAL, COMM, DEPT_NO)
    VALUES
    (7934, 'MILLER', 'CLERK', 7782, '23-JAN-1982', 1300, NULL, 10);

    --Step2. Created a Type in the Database, And then Created a Table of that Type
    CREATE TYPE typ_emp
    AS OBJECT (
    EMP_NO NUMBER(4),
    E_NAME VARCHAR2(10),
    JOB VARCHAR2(9),
    MGR NUMBER(4),
    HIRE_DATE DATE,
    SAL DECIMAL(7, 2),
    COMM DECIMAL(7, 2),
    DEPT_NO NUMBER(2)
    );

    CREATE TYPE tbl_emp IS TABLE OF typ_emp;

    Step3. Created a Table Function and Returned the Table Type
    CREATE OR REPLACE FUNCTION ecoa_procs.fn_emptest_ar RETURN ecoa_procs.tbl_emp
    AS
    dataoutp ecoa_procs.tbl_emp := ecoa_procs.tbl_emp();
    BEGIN

    FOR i IN (SELECT * FROM ECOA_PROCS.emp)
    LOOP
    dataoutp.extend;
    END LOOP;

    RETURN dataoutp;

    END fn_emptest_ar;
    /

    SELECT *
    FROM TABLE(fn_emptest_ar());

    The Function gets Executed, how ever it returns a Null or Empty Structure as Output i.e I can see the Expected Record Count in my case 10, but all the Rows are Empty or NULL.

    ReplyDelete
    Replies
    1. Anand, you are extending the nested table inside your loop, but you never actually assign any values to the newly created element. For example: dataoutp(dataoutp.last).mgr := i.mgr. If you do that, then you should see data in the array and in the select from the function.

      Delete
  21. Hi,
    i´ve used the "from table(" feature in a small utility-project .... "caching" of primary-keys. Nice to use in adhoc analysis, but intended to eliminate some sub-selects in hibernate-fetching:
    https://performanceaddict.blogspot.de/2017/04/those-sub-selects-part-2.html

    ReplyDelete
