Over the years, we (the DB2 community) have grown used to more and more data in our databases and, indeed, more and more data in our meta-database (the DB2 Catalog).
Real Time Statistics (RTS)
In fact, the amount of information that was “of use” took a quantum leap when the Real Time Statistics (RTS) tables were introduced back in DB2 V7 (as a set of post-GA APARs). This new data source freed us from having to run a RUNSTATS to see when we need a REORG and also helped greatly with Image Copy and REORG decisions. When it was first announced, a lot of shops were worried about the overhead of collecting this data, but as DB2 gathers the statistics internally anyway, the actual overhead is negligible – especially in relation to the benefit they give!
For more details about RTS usage, see one of my earlier Newsletters. Nearly all of the SOFTWARE ENGINEERING products use the RTS data in some way, even if it is just displaying the data on the screen “out of interest”.
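As a small sketch of the kind of thing you can do with RTS yourself – assuming DB2 9 or later, where the RTS tables live in the catalog as SYSIBM.SYSTABLESPACESTATS, and with a change threshold that is purely illustrative:

```sql
-- Find table spaces where more than 10% of the rows have changed
-- since the last REORG (the 10% threshold is illustrative only)
SELECT DBNAME, NAME, TOTALROWS, REORGLASTTIME,
       REORGINSERTS + REORGUPDATES + REORGDELETES AS CHANGES
FROM   SYSIBM.SYSTABLESPACESTATS
WHERE  TOTALROWS > 0
AND    (REORGINSERTS + REORGUPDATES + REORGDELETES) * 100
       > TOTALROWS * 10
ORDER BY CHANGES DESC;
```

No RUNSTATS needed – the counters are maintained by DB2 itself as the data changes.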
Dynamic Statement Cache (DSC)
Then in DB2 V8 came the next jump forward – EXPLAIN STMTCACHE ALL (as long as your user ID has SYSADM, of course; otherwise you only get your own statements back…). This enabled a look inside the Dynamic Statement Cache (DSC) that had previously been hidden behind the veil of IFCIDs. To get really useful information, though, you *must* have started these two traces:
– START TRACE(PERFM) CLASS(31) IFCID(318)
– START TRACE(ACCTG) CLASS(3)
Then you must wait for a “representative amount of data” to accumulate – is that one minute, one day, one month? Again the overhead plays a major role, but every figure I have seen is “between 2% and 4%”, which for me is low enough to justify having these traces always switched “on”. Of course, the data that is returned is priceless and more than outweighs the overhead. Incidentally, I wrote an earlier newsletter all about the DSC.
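With the traces running, the extraction itself is simple – assuming you have created a DSN_STATEMENT_CACHE_TABLE under your schema (the DDL ships with the IBM sample jobs; column names here follow the standard layout):

```sql
-- Snapshot the whole DSC into DSN_STATEMENT_CACHE_TABLE
EXPLAIN STMTCACHE ALL;

-- Then rank the cached statements by accumulated CPU
SELECT STMT_ID, STAT_EXEC, STAT_CPU, STAT_ELAP,
       SUBSTR(STMT_TEXT, 1, 60) AS SQL_TEXT
FROM   DSN_STATEMENT_CACHE_TABLE
ORDER BY STAT_CPU DESC
FETCH FIRST 20 ROWS ONLY;
```

Without IFCID 318 active, the STAT_* columns come back as zeros – which is why the traces above are a *must*.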
What I have seen is that the “hit ratio” of the DSC is basically a worthless metric. I have been to shops where the hit ratio was 99%+ but the DSC was flushing statements out at a rate of over 20,000 per hour! Yes, that is not a typo: 20K statements/hour! The “problem” is that if you have, say, 4,000 statements in the cache (which is normal for a cache size of about 150,000 KB, by the way) and you imagine that one of the SQLs is executed 16,000,000 times while all the rest are flushed, you still have a “hit ratio” of nearly 100% – 16,000,000 hits against even 20,000 misses is 16,000,000 / 16,020,000, or about 99.9%! The better metric is your flushes per hour, which you should try to reduce to less than 4,000 if you can…
Remember that literals kill the DSC – and Java developers kill it for fun!
To squeeze even more out of the DSC you must make sure it is as big as a house – set it to 320,000 KB if you can! Make sure literals are used only when absolutely necessary! Check all special register usage and any other reasons why the DSC was not correctly used. Our new SQL WorkloadExpert does all this and more for you, of course.
The new Static Statement Cache (SSC)
Having slipped in a mention of our new product, from now on called WLX, we can head on into the new world introduced with DB2 10 NF: what I call the Static Statement Cache (SSC). IBM call it static SQL in the EDMPOOL, but I find SSC a *much* better name!
This new addition to the “very cheap but great data” from DB2 is the final piece in a large, complex jigsaw puzzle. It treats static SQL exactly the same as dynamic SQL, so you now have performance metrics and data in the same format and style as the DSC. IBM have not provided an “EXPLAIN SSC ALL” – you still have to write mainframe assembler to get the data from the IFCIDs – but that is why we are here! We write the assembler, the high-speed STCs and the GUI front end so you do not have to worry about any of that!
Build your own SQL Performance Warehouse
Furthermore, IBM added an enhancement to the DSC (and also to the new SSC) whereby flushed statements are now *also* written out as an IFCID – this is fantastic news!
It simply means that you no longer have “invisible” SQL on your machine – all executed SQL is now freely available along with its performance metrics. This is great news for SQL tuners and performance managers. You can now finally see what is running on the machine, when it runs, who runs it, what it costs, how it adds to the 4-hour rolling average, etc. Then you can analyze the data to find trends, bottlenecks and areas for tuning that, up until now, were not even known to be there! There is no finger-pointing here: the data simply did not exist before, so no one could see what possible benefits existed.
So now you can build your own SQL Performance Warehouse for long-term analysis and tuning, which should contain:
– RTS data – object list, when was the object last REORGed, how many rows?, etc.
– DSC data – which SQLs are run? How bad are they? Can I make them “better”?, etc.
– SSC data – which packages are causing me pain? Which SQLs are never run?, etc.
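A minimal sketch of such a warehouse table – every name and column here is hypothetical, purely to show the idea of keeping a timestamped history of the per-statement metrics:

```sql
-- Hypothetical history table: one row per statement per collection run
CREATE TABLE SQL_PERF_WAREHOUSE
( COLLECTED    TIMESTAMP  NOT NULL  -- when this snapshot was taken
, SOURCE       CHAR(3)    NOT NULL  -- 'DSC' or 'SSC'
, STMT_ID      BIGINT     NOT NULL  -- statement identifier
, EXECUTIONS   BIGINT     NOT NULL  -- execution count in the interval
, CPU_SECS     FLOAT      NOT NULL  -- accumulated CPU time
, ELAPSED_SECS FLOAT      NOT NULL  -- accumulated elapsed time
, STMT_TEXT    CLOB(2M)   NOT NULL  -- the SQL text itself
);
```

Load it on a regular schedule (daily or hourly) and the trend, comparison and object-usage reporting all falls out of plain SQL over this one table.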
Further, you can add a cross-reference to objects to get object-level usage statistics for all your tables, indexes, packages, collections, etc. This can also be used for an application-level analysis, which in turn leads to one of the really great features: a “before and after” comparison. To explain this, think about the following scenario:
Find the “bad guy”
You have a “bad guy” SQL that would really fly if a new index was created.
The problem is: “Will this new index help this one query but cripple all of the rest?” – and until now you could never know!
Now you can find out by simply gathering all the data for, say, a week, then creating the index (with RUNSTATS etc.), and then collecting data for another week. Once that week is done, simply compare all of the SQL that had anything to do with that table and see how the CPU, I/O and elapsed times compare. If the overall average CPU goes down, then you can conclude it was a winner! However, if it has gone up, you might as well drop the index and think again…
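The comparison itself is then a one-liner – sketched here against a hypothetical history table SQL_PERF_WAREHOUSE with a COLLECTED timestamp and per-statement CPU and execution counters (all names, dates and the table filter are purely illustrative):

```sql
-- Average CPU per execution for statements touching the table,
-- the week before vs. the week after the new index was created
SELECT CASE WHEN COLLECTED < '2014-06-08-00.00.00.000000'
            THEN 'BEFORE' ELSE 'AFTER' END      AS PERIOD
     , SUM(CPU_SECS) / SUM(EXECUTIONS)          AS AVG_CPU_PER_EXEC
FROM   SQL_PERF_WAREHOUSE
WHERE  STMT_TEXT LIKE '%MY_TABLE%'
GROUP BY CASE WHEN COLLECTED < '2014-06-08-00.00.00.000000'
              THEN 'BEFORE' ELSE 'AFTER' END;
```

If the AFTER average is lower than the BEFORE average, the index earns its keep; if not, drop it.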
All of this data is also great for charting, graphing and reporting in speedometers, barometers, histograms, pie charts and radar diagrams, which raises awareness of the SQL workload at management level in a really good, visual way.
I hope this brief introduction to the “new” SSC and the enhanced DSC has awakened your interest in the topic – I am really excited about all this data (but then again, I am a geek!).
As always if you have any questions or comments please email me.