Thursday, January 26, 2012

An Oracle Tuning Exercise Followup



After my last post, I took some good-natured heat from my colleagues for avoiding the deep, probing look into tuning for which they all seem to yearn. I believe they missed my point: the exercise itself must be defined before you can begin to search for problems. Lo and behold, I have been asked to do yet another tuning assessment, evaluating a client database for performance. This one I know much less about, so it is even more important that a professional approach be taken. Hopefully I will post about that project sometime soon.


In the meantime, I thought it only fair that I follow up on my prior post, since so many people have asked about the result. In summary, we had several queries that showed up as high-resource consumers in an AWR (Automatic Workload Repository) report. Their purpose was to produce critical business metrics, using some complex algorithms and computations. While one would think the computations were the problem, I assure you that with today's modern hardware and software, they seldom are.


What did happen was that a parameter being passed to the queries as a bind variable was defined as a VARCHAR2 string in the database, yet the application was passing a number. Because of this datatype difference, an index was not used, and each execution did several full table scans. This is a common problem, but it is often difficult to spot. Running an "explain plan" on the queries with the bind variables in place showed that the indices would be used. The AWR listing, likewise, shows only the bind placeholders (:1, :2, :3 and so on), not their values. To see the actual values of these variables, you must catch the query while it is still in memory and then query the v$sql_bind_capture view. When we caught the queries running live in the system and looked at the captured bind values, we saw that they were not quoted strings, but numbers.
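For anyone who wants to replicate that lookup, here is a rough sketch of the kind of query we ran against v$sql_bind_capture; the &sql_id substitution variable is just a placeholder for whatever statement you have caught in the shared pool (from v$sql or the AWR report), and the columns available can vary a bit by version.

-- Sketch only: show the bind values Oracle actually captured for one statement.
-- Supply the sql_id of the statement you caught in memory.
select name,            -- bind placeholder, e.g. :1
       position,
       datatype_string, -- this is where NUMBER vs VARCHAR2 shows up
       value_string,    -- the actual value passed in
       last_captured
from   v$sql_bind_capture
where  sql_id = '&sql_id'
order  by child_number, position;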


For example, a query on this simple table....

create table test_table
(pk_col varchar2(5));

With the following (indexed) primary key....

alter table test_table add primary key  (pk_col);

Such as....

select *
from test_table
where pk_col = 5;

...would do a full table scan!


The reason is that the number 5 is not a character string, and the column is defined as a character string. This particular set of data zero-filled all of its numeric values. So if there is a row in the table with a value of "00005", Oracle will still find it, but only by implicitly converting the column value to a number and comparing the result to 5, row by row; the index on the character column cannot be used. If there were millions of rows, the results would not come back quickly!
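If you want to see the implicit conversion with your own eyes, a quick explain plan on the toy table above will show it; this is just a sketch, and the exact plan formatting varies by Oracle version.

-- Sketch: reproduce the problem on the toy table from above.
explain plan for
  select * from test_table where pk_col = 5;

select * from table(dbms_xplan.display);

-- The plan will typically show a TABLE ACCESS FULL of TEST_TABLE with a
-- filter along the lines of TO_NUMBER("PK_COL")=5; the function is applied
-- to the column, so the index on PK_COL cannot be used.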

The best solution for this, it was decided, was to change the application code so that a five-character, zero-filled quoted string was always passed in the bind variable. Sometimes changing the code is not an option, though. In such a case, another way to go would have been to create a "function-based index", such as:

create index abcidx on test_table (to_char(lpad(pk_col,5,'0')));

This index, on the same column, uses two Oracle functions to ensure that the value being compared is:
a) zero-filled to the left, and
b) always five character positions.

This allows an index to be used, provided the query compares against that same expression.
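Either way, the proof is in the plan. Here is a minimal sketch of how you could check the preferred fix (the application passing a quoted, zero-padded string) against the toy table; again, plan output will differ slightly across versions.

-- Sketch: with the value passed as a quoted, zero-padded string, the
-- comparison is character-to-character and the primary key index is usable.
explain plan for
  select * from test_table where pk_col = '00005';

select * from table(dbms_xplan.display);

-- Expect an INDEX UNIQUE SCAN on the primary key index rather than a
-- TABLE ACCESS FULL.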


So much for the details of my little encounter. I am all about keeping things as simple and understandable as possible when discussing the most highly technical of subjects. In this case, luckily, I could! There were also a slew of Oracle parameters that we will be changing to aid performance, and I handed those recommendations to the people in charge of implementing them, but they are quite dry, involve SGA sizing, and are outside the scope of today's discussion. When the time comes, we will talk about removing the nasty AMM (Automatic Memory Management) feature from your Oracle database, as it almost never helps at all!