User defined functions called from SQL

I was cc’d on an email yesterday morning that was sent out to the development group. It was a pretty bluntly worded question asking why a monthly job was taking well over an hour to run. Time to investigate.

The issue was reproducible in our staging environment, so generating a SQL trace to take a look at the problem was a no-brainer.

After running the trace file through tkprof (all defaults) it was immediately apparent that the issue was with recursive statements. Recursive statements include all the SQL Oracle has to do in the background to parse incoming SQL. That overhead will vary depending on the type of parsing necessary but is unavoidable for newly created cursors. In addition, user defined PL/SQL functions called within SQL statements are also accounted for under the recursive statements summary.

Embedding user defined PL/SQL in a SQL statement or view definition has a number of disadvantages.

  • Any time you embed user defined PL/SQL calls within a SQL statement (or in this case a view definition) Oracle must perform a context switch between the SQL and PL/SQL engines. This implies CPU overhead.
  • If there are any performance problems embedded in the PL/SQL function, those problems are inherited by the calling SQL or view.
  • PL/SQL is not part of the relational model. Any time you do something non-relational in SQL you limit the opportunities for the Cost Based Optimizer to do some behind-the-scenes magic to improve the efficiency of the statement.
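To make the per-row cost concrete, here is a small sketch in Python using SQLite as a stand-in for Oracle (the table, data, and function names are all invented for illustration): a scalar function referenced in a WHERE clause is called back once for every candidate row, while the equivalent pure-SQL predicate involves no callback at all.

```python
import sqlite3

# SQLite stand-in for the behavior described above: a scalar function
# in a WHERE clause is invoked once per candidate row, much like the
# per-row SQL-to-PL/SQL context switches in the Oracle case.
conn = sqlite3.connect(":memory:")
calls = {"n": 0}

def year_in_past(y):
    calls["n"] += 1
    return 1 if y < 2020 else 0

conn.create_function("year_in_past", 1, year_in_past)
conn.execute("create table t (id integer, yr integer)")
conn.executemany("insert into t values (?, ?)",
                 [(i, 1900 + (i % 200)) for i in range(1000)])

# Function-based filter: the engine crosses into Python for every row.
rows_fn = conn.execute(
    "select count(*) from t where year_in_past(yr) = 1").fetchone()[0]

# Equivalent pure-SQL predicate: same answer, no per-row callback.
rows_sql = conn.execute(
    "select count(*) from t where yr < 2020").fetchone()[0]
assert rows_fn == rows_sql
```

After the first query, `calls["n"]` equals the number of rows scanned, which is exactly the shape of the 13.5 million recursive executions in the trace below.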

Below is the summary information from tkprof.

    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        2      0.01       0.01          0          7          0           0
    Execute      2      0.88       5.98          0      12145      74414           2
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        4      0.89       5.99          0      12152      74414           2

    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse     2141     17.89      17.89          0          0          0           0
    Execute 13589173    492.64     489.04          5        180          0           0
    Fetch   13589083   4007.37    4035.89     778773  293428221        462    13589118
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total   27180397   4517.91    4542.83     778778  293428401        462    13589118

13.5 million executions of recursive SQL statements, returning 13.5 million rows. I always have mixed reactions when I see stuff like this. On the one hand it is frustrating that easily avoidable inefficiencies are so common. On the other hand these sorts of things are typically quite easy to fix, and the improvement in response time often seems like a miracle to the end user, which is always rewarding.

It’s my understanding that the application that produced this trace file was written under heavy deadlines and constantly changing requirements. I know, nothing new there. However, the developers seemed very fond of writing views on top of views on top of views. That wasn’t much of a problem when the data volume was relatively low, but now that the application has been running for quite a few years it is not scaling very gracefully.

Anyways, after seeing the recursive statement summary it was only a matter of identifying the user defined function and figuring out how to merge the logic of that function into a purely SQL solution. I went back through the trace file and found that the following SQL was executed nearly 13.5 million times.

            TRUNC(ENDDATE) >= :B1;

It took me a little while to figure out where this was coming from, but I eventually tracked it down to a view with a user defined function used to filter results for the current period. A few emails between the developers and myself and a simple view rewrite resulted in a several-orders-of-magnitude improvement in response time. It was fun creating a new trace file after the rewrite and emailing out a difference report. It was a good day.
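The real view isn’t shown here, but the general shape of the fix can be sketched with a SQLite stand-in (the table, view names, and dates are all invented): fold the function’s date comparison directly into the view’s WHERE clause and verify both forms return the same rows.

```python
import sqlite3

# Hypothetical reconstruction of the rewrite: the old view filters
# through a per-row function call, the new one expresses the same rule
# as a plain SQL predicate. All names and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("create table orders (id integer, enddate text)")
conn.executemany("insert into orders values (?, ?)",
                 [(1, "2009-01-15"), (2, "2009-02-03"), (3, "2009-02-20")])

def in_current_period(enddate, period_start):
    return 1 if enddate >= period_start else 0

conn.create_function("in_current_period", 2, in_current_period)
conn.execute("""create view v_old as
    select * from orders where in_current_period(enddate, '2009-02-01') = 1""")

# The rewrite: no function call, just a comparison the engine (and in
# Oracle's case the CBO) can see straight through.
conn.execute("""create view v_new as
    select * from orders where enddate >= '2009-02-01'""")

old = conn.execute("select id from v_old order by id").fetchall()
new = conn.execute("select id from v_new order by id").fetchall()
assert old == new == [(2,), (3,)]
```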


Check constraints and determinism

In the same meeting I spoke of yesterday, the developer whose code we were reviewing had a check constraint on a year_of_birth column defined as 'year_of_birth between 1900 and 9999'. I suggested that the constraint offered no benefit. I confidently suggested that what he really wanted was 'year_of_birth <= extract(year from sysdate)'.

Later that afternoon I got an email stating that my suggestion resulted in an ORA-02436. Oops. It turns out that check constraints cannot contain calls to functions that are not deterministic. That makes sense. You wouldn’t want a value to pass a check constraint on its way into the database only to violate that constraint at a later time. You could never guarantee that all business rules explicitly set by the constraints were accurately enforced for all data in the database.
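SQLite enforces the same restriction, so the failure mode is easy to reproduce outside Oracle. A sketch, with an invented table name:

```python
import sqlite3

# SQLite analogue of ORA-02436: it likewise refuses a non-deterministic
# function (here strftime with 'now') inside a CHECK constraint.
conn = sqlite3.connect(":memory:")
try:
    conn.execute(
        "create table person (year_of_birth integer "
        "check (year_of_birth <= cast(strftime('%Y','now') as integer)))")
    conn.execute("insert into person values (1980)")
    rejected = False
except sqlite3.Error:
    # Refused for the same reason Oracle refuses SYSDATE in a CHECK:
    # a value that passes today could violate the constraint tomorrow.
    rejected = True
assert rejected
```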

Anyways, I responded with a list of three options.

1. This was a very modestly sized APEX app. Validating the data through APEX validations and calling it a day seemed perfectly reasonable. The other two options provide no additional information to the optimizer that check constraints potentially do. Sure, you want data rules as close to the data as possible, but this is so trivial that I don’t see it mattering much.
2. Define a trigger that raises an exception if the :new value violates the rule. I generally despise triggers. I’ve wasted far too much of my life trying to track down the source of some unexplained event only to eventually discover it has something to do with triggers. They complicate things far more often than they simplify them. However, I think it is commonly acknowledged that a valid use for triggers is data validation that cannot be done declaratively. In fact, I think I’ll be hearing a presentation given by Toon Koppelaars making that very same claim at Hotsos this year.
3. We just upgraded to 11g, so you can play games with virtual columns, but it’s really a kludge and I could imagine a day when the database kernel wises up to the trick in order to prevent the problems the restriction is meant to address. I discouraged this.

In the end, I think the developer is going to go with both 1 and 2. That way you prevent unnecessary submits back to the database and protect yourself against bad data getting in through some route other than the application.
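Here is a sketch of option 2 in a SQLite/Python analogue (table and trigger names invented; Oracle trigger syntax differs, but the shape is the same): since the rule can’t live in a CHECK constraint, a before-insert trigger rejects future birth years instead.

```python
import sqlite3

# A validation trigger standing in for the Oracle :new-value check:
# non-deterministic date functions are allowed here, unlike in a CHECK.
conn = sqlite3.connect(":memory:")
conn.execute("create table person (year_of_birth integer)")
conn.execute("""
    create trigger person_yob_check
    before insert on person
    when new.year_of_birth > cast(strftime('%Y','now') as integer)
    begin
        select raise(abort, 'year_of_birth cannot be in the future');
    end""")

conn.execute("insert into person values (1980)")      # accepted
try:
    conn.execute("insert into person values (9999)")  # rejected by trigger
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
assert rejected
assert conn.execute("select count(*) from person").fetchone()[0] == 1
```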

In case anyone wants to work a non-deterministic function into a constraint, here’s an example that you can tailor to your needs (I lifted this idea from here):

    create or replace function year_in_past( in_year in table1.year_of_birth%type ) return varchar2 deterministic is
    begin
      return case when in_year > extract(year from sysdate) then 'N' else 'Y' end;
    end year_in_past;
    /

    alter table table1 add (year_in_past_ind as (cast(year_in_past(year_of_birth) as varchar2(1))));

    alter table table1 add constraint yip_ck1 check (year_in_past_ind = 'Y');

This wouldn’t work if you wanted to ensure a year was set to some future value since, as we all know, time marches on and what once was the future always becomes the past.

Check constraints: Not null implied?

The following question was asked in a code review meeting this morning: Do check constraints imply not null? It was prompted by a column defined as not null that also had a check constraint to ensure the column only allowed a discrete set of values. Was the ‘not null’ part redundant?

Embarrassingly, I didn’t know the answer. My guess was that a check constraint such as

    column_name in ('val1','val2')

would imply the column could not contain null values, and so the ‘not null’ piece of the table DDL was redundant. Easy enough to find out.

    INSERT INTO tst_chk(col, col2) VALUES ('T',NULL);
    >1 row created.
    INSERT INTO tst_chk(col, col2) VALUES ('T','C');
    >INSERT INTO tst_chk(col, col2) VALUES ('T','C');
    >ERROR at line 1:
    >ORA-02290: check constraint (DB_ADMIN.TST_CHK1) violated

So, a check constraint is only violated when its expression evaluates to false; a null value makes the expression evaluate to unknown, which passes. Check constraints do not imply not null, and the explicit ‘not null’ declaration was not redundant after all.
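The same experiment is easy to reproduce in SQLite, which follows the same three-valued logic (the table definition here is an invented stand-in for the original tst_chk):

```python
import sqlite3

# A CHECK expression that evaluates to unknown (NULL) does not count
# as a violation, so NULLs pass unless NOT NULL is declared separately.
conn = sqlite3.connect(":memory:")
conn.execute(
    "create table tst_chk (col text, col2 text check (col2 in ('A','B')))")

conn.execute("insert into tst_chk values ('T', NULL)")     # unknown -> passes
try:
    conn.execute("insert into tst_chk values ('T', 'C')")  # false -> violation
    violated = False
except sqlite3.IntegrityError:
    violated = True
assert violated
assert conn.execute("select count(*) from tst_chk").fetchone()[0] == 1
```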
