Received this note from a PL/SQL developer today:

I have an idea to build the database of my application in such a way that all DML operations for the application are managed at the back end. There should be generic functions/procedures that perform the DML operations, and there should also be functions that provide the "Select" data. In short, I want to remove all sorts of DML from the front end and bring them to the back end. In case of any DB structural changes, we should not need a large development effort. For example, if we have to add a couple of columns to a table, that should be easily manageable/configurable in order to minimize the development cost.

This is the sort of note that makes me happy.

For years I have proposed to Oracle Database developers that we see SQL as a service provided to us, not something we write over and over and over again in our "high level" application code (as opposed to low-level generic utilities like error loggers).

After all, every SQL statement is a hard-coding, just like every literal is a hard-coding.

And now I can hear you all saying, as my granddaughter is wont to say: "Whaaaaat?!"

Yes, every SQL statement is a hard-coding, and it is far worse and more dangerous than the hard-coding of a literal.

In both cases, we make the assumption that this thing (a numeric value or a six-way join) not only "works" (is accurate) today but will stay the same in the future.

Ha!

Ha, ha!

What a funny joke!

Of course it's all going to change. Now, if you hard-code the literal 76034 in your application in 5,000 places, no big deal, really. Just do a global search and replace (GSAR) in your code.

There is very little chance you will change anything but the literal value of interest.

[Of course, little chance and no chance are two different things entirely. So when doing a GSAR, please do visually check each change. Years ago, right after I left Oracle, I worked as a consultant at the HQ of one of the biggest "fast food" chains in the world. I fixed a bug and, "while I was at it," optimized some other code (or so I thought). Passed it on to QA, went into production on Sunday, and on Wednesday they had to roll back three days of production activity. Why? Because my GSAR had changed "WHERE X = Y" into "WHERE X = X" - I won't bore you with the details or why the actual change didn't seem quite as stupid as that. The good news? I was not blamed. It was the QA team's fault. Ha! Ha, ha!]

But what if you repeat the same or a very similar six-way join in three or five places? How can you find and upgrade all those "old" SQL statements?

Bottom line: if you do not build a layer of code around your data, essentially around your SQL statements, you end up with an out-of-control, hard-to-optimize, impossible-to-test-and-maintain application. Yuch.

So use views to hide complex and often-needed WHERE clauses. Build packages to hide transactions, queries, and non-query DML operations.
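
Here is a minimal sketch of the view side of that advice (the package side is shown in the example at the end of this post). The view name, commissioned_employees, is made up for illustration, and it assumes the standard HR employees table:

/* Hypothetical view: one place to define which employees are "commissioned". */
CREATE OR REPLACE VIEW commissioned_employees
AS
   SELECT *
     FROM employees
    WHERE commission_pct IS NOT NULL;

Every query that needs this subset now selects from commissioned_employees; if the definition ever changes, you change the view, not every query that repeats the predicate.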

And, ideally, you should be able to generate lots of this code. I have, in the past, built and offered really powerful code generators, including the incredibly awfully named QNXO and its descendant, Quest CodeGen Utility. Neither, sadly, is available anymore. But I hope in the next year to offer a simpler substitute that can be used very flexibly.

In the meantime, the next time you start to write a SELECT directly inside your high-level code, stop and ask yourself: "Might I or someone else need this elsewhere? Might I someday want to use the result cache feature on this query?"

Then build a package and put the query inside a function (or add it to an existing package). Return a record or a collection, as appropriate. Call that function instead. Your code becomes more robust, and you spend less time writing and debugging it.

For example, don't do this:

CREATE OR REPLACE PROCEDURE do_stuff_with_employee (
   employee_id_in IN employees.employee_id%TYPE)
IS
   l_employee   employees%ROWTYPE;
   l_manager    employees%ROWTYPE;
BEGIN
   SELECT *
     INTO l_employee
     FROM employees
    WHERE employee_id = employee_id_in;
    
   /* Do some stuff... and then, again! */

   SELECT *
     INTO l_manager
     FROM employees
    WHERE employee_id = l_employee.manager_id;   
END;
/

do this:

CREATE OR REPLACE PACKAGE employees_sql
IS
   FUNCTION onerow (employee_id_in IN employees.employee_id%TYPE)
      RETURN employees%ROWTYPE;
END;
/

CREATE OR REPLACE PACKAGE BODY employees_sql
IS
   FUNCTION onerow (employee_id_in IN employees.employee_id%TYPE)
      RETURN employees%ROWTYPE
   IS
      onerow_rec   employees%ROWTYPE;
   BEGIN
      SELECT *
        INTO onerow_rec
        FROM employees
       WHERE employee_id = employee_id_in;

      RETURN onerow_rec;
   END;
END;
/
      
CREATE OR REPLACE PROCEDURE do_stuff_with_employee (
   employee_id_in IN employees.employee_id%TYPE)
IS
   l_employee   employees%ROWTYPE;
   l_manager    employees%ROWTYPE;
BEGIN
   l_employee := employees_sql.onerow (employee_id_in);
    
   /* Do some stuff... and then, again, but now 
      just a function call! */

   l_manager := employees_sql.onerow (l_employee.manager_id);
END;
/
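
And here is one payoff of giving the query a single point of definition. Remember that result cache question? If you decide caching this lookup makes sense for your data, the change is confined to the package; no caller has to be touched. This is just a sketch, and it assumes you are on a release that supports the PL/SQL function result cache (11.1 or higher):

CREATE OR REPLACE PACKAGE employees_sql
IS
   FUNCTION onerow (employee_id_in IN employees.employee_id%TYPE)
      RETURN employees%ROWTYPE
      RESULT_CACHE;
END;
/

CREATE OR REPLACE PACKAGE BODY employees_sql
IS
   FUNCTION onerow (employee_id_in IN employees.employee_id%TYPE)
      RETURN employees%ROWTYPE
      RESULT_CACHE
   IS
      onerow_rec   employees%ROWTYPE;
   BEGIN
      /* Same query as before; only the function header changed
         (RESULT_CACHE appears in both the spec and the body). */
      SELECT *
        INTO onerow_rec
        FROM employees
       WHERE employee_id = employee_id_in;

      RETURN onerow_rec;
   END;
END;
/

If those two queries were still written in-line inside do_stuff_with_employee (and who knows how many other programs), you would be hunting them down one by one instead.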
