Saint Louis Db2 Users Group – June 2018

STLDUG – Saint Louis Db2 Users Group, MO, USA – June 2018

SEGUS & SOFTWARE ENGINEERING present:

Total environment simulation – Workload Replay in an agile world

> Pdf Presentation

Is your production cloning for development/testing a quick and fully automated procedure? Does it consider differences in the underlying hardware, and does it allow you to choose and automatically run SQL workloads? Maybe your Db2 cloning needs an upgrade! By using a highly customizable XML script and by exploiting virtualization features, we can make a clone YOUR clone and even make it behave like production. Most importantly, SQL Capture/Replay fills the gap of automatically running your production workload for testing. It captures production SQL 24/7 by exploiting highly efficient IFCID-via-OPx technology, then filters and prepares the statements for replay. This enables shops to capture any SQL ever executed and keep it in sets of workloads that can be selected for testing. As an integrated part of Db2 cloning, this adds up to a comprehensive cloning solution – not only for objects and data, but also for production workloads. Learn how to integrate individual tasks for flexible automation and how to choose from four different levels of testing. The levels include simple anomaly alerting, access path verification, cloning, and workload capture/replay to easily discover different behavior resulting from changes such as new application releases or environment upgrades. Find out how easily you can pick and choose from KPI-based test automation.


More about Db2 Continuous Delivery – CD

Presentation Outline

  1. Testing, virtualizing and simulating – the aspects of reliable quality assurance
  2. Db2 database/object cloning – what’s state of the art and what’s beyond
  3. XML commander – The comprehensive automation of flexibility, covering
    • FTP/Routing
    • JCL
    • ISPF file tailoring, panels, messages
    • Db2 Commands
  4. Emphasizing the Data – what has changed since July, and what are the hybrid aspects of managing Data?
    – Anomaly alerting based on Incompatibility Change Indicators (ICIs)
    – Dynamic/Static Access Path Change Detection, e.g. via Plan Management
    – Cloning exploiting Backup System
    – Workload-KPI verification using SQL replay and KPI comparison
  5. Top 10 list of tips and gotchas from a year of Db2 12 production databases around the world

Audience Experience:   Intermediate Advanced
Platform:                         Db2 z/OS
Presentation Length:     60 minutes
Presentation Category:  Database Administration, Performance Management, Db2 Migration

Speaker biography

Ulf Heinrich is the Director of Solutions Delivery at SOFTWARE ENGINEERING GmbH. He specializes in Db2 operations and performance tuning, focusing on the growing requirement for cost reduction and 24×7 operations. As a consultant at large customer sites, he has implemented database maintenance procedures and recovery strategies, and has also experienced the pitfalls of recovery scenarios under real-world recovery pressure. His activities cover EMEA, as well as North America through SE’s U.S. subsidiary, SEGUS Inc. As a member of SE’s Request Board he works closely with customers and the development labs.

 

Baltimore Washington Db2 Users Group – June 2018

BWDb2UG – Baltimore Washington Db2 Users Group, MD, USA – June 2018

SEGUS & SOFTWARE ENGINEERING present:

Db2 12 – 12 months agile production experience with more focus on the Data and less on the base

> Pdf Presentation

Migrating to Db2 12 looks like pretty much the same procedure as every 3 years. Then we get introduced to a variety of levels that control how new features and functions are delivered, and we realize that the every-3-years-same-procedure is history. The levels, their dependencies and their behavior are a fundamental change, implemented to support agile development and, more specifically, its associated Continuous Delivery of future upgrades and fixes. As usual, this comes with benefits, but also challenges.

This presentation covers not only customers’ experiences migrating to Db2 12, but also how the changed maintenance approach has forced them to change existing procedures.

The second part covers daily operation and the exploitation of Db2 12 enhancements by customers in different industries.

The overall results aren’t bad, but a major conclusion is that this Db2 version is significantly different from the versions before it. It’s the first agile version, with all its pros and cons. According to IBM’s July 2017 announcement, the new database design “represents the elemental nature of Db2 (think periodic table) and connotes the fundamental importance of hybrid data management.” Where does this materialize in our daily work with Db2? Join this presentation and benefit from the experience of other Db2 customers around the world running Db2 12.


More about Db2 Continuous Delivery – CD

Presentation Outline

  1. a – Db2 migration and maintenance – meet the agile and continuous delivery face of Db2.
    b – Db2 Code, Catalog, Function and Application Levels – differences, dependencies and how to successfully manage them.
  2. Running Db2 12 and exploiting new features – what’s hot and how to exploit it best?
  3. Emphasizing the Data – what has changed since July, and what are the hybrid aspects of managing Data?
  4. Top 10 list of tips and gotchas from a year of Db2 12 production databases around the world

Audience Experience:   Intermediate Advanced
Platform:                         Db2 z/OS
Presentation Length:     60 minutes
Presentation Category:  Database Administration, Performance Management, Db2 Migration

Speaker biography

Ulf Heinrich is the Director of Solutions Delivery at SOFTWARE ENGINEERING GmbH. He specializes in Db2 operations and performance tuning, focusing on the growing requirement for cost reduction and 24×7 operations. As a consultant at large customer sites, he has implemented database maintenance procedures and recovery strategies, and has also experienced the pitfalls of recovery scenarios under real-world recovery pressure. His activities cover EMEA, as well as North America through SE’s U.S. subsidiary, SEGUS Inc. As a member of SE’s Request Board he works closely with customers and the development labs.

Central Canada DB2 Users Group 2018

CCDB2UG – Central Canada DB2 Users Group 2018, Toronto, ON, Canada – June 4-5, 2018

SEGUS & SOFTWARE ENGINEERING present:

Db2 12 – 12 months agile production experience with more focus on the Data and less on the base

> Pdf Presentation

Migrating to Db2 12 looks like pretty much the same procedure as every 3 years. Then we get introduced to a variety of levels that control how new features and functions are delivered, and we realize that the every-3-years-same-procedure is history. The levels, their dependencies and their behavior are a fundamental change, implemented to support agile development and, more specifically, its associated Continuous Delivery of future upgrades and fixes. As usual, this comes with benefits, but also challenges.

This presentation covers not only customers’ experiences migrating to Db2 12, but also how the changed maintenance approach has forced them to change existing procedures.

The second part covers daily operation and the exploitation of Db2 12 enhancements by customers in different industries.

The overall results aren’t bad, but a major conclusion is that this Db2 version is significantly different from the versions before it. It’s the first agile version, with all its pros and cons. According to IBM’s July 2017 announcement, the new database design “represents the elemental nature of Db2 (think periodic table) and connotes the fundamental importance of hybrid data management.” Where does this materialize in our daily work with Db2? Join this presentation and benefit from the experience of other Db2 customers around the world running Db2 12.


More about Db2 Continuous Delivery – CD

Presentation Outline

  1. a – Db2 migration and maintenance – meet the agile and continuous delivery face of Db2.
    b – Db2 Code, Catalog, Function and Application Levels – differences, dependencies and how to successfully manage them.
  2. Running Db2 12 and exploiting new features – what’s hot and how to exploit it best?
  3. Emphasizing the Data – what has changed since July, and what are the hybrid aspects of managing Data?
  4. Top 10 list of tips and gotchas from a year of Db2 12 production databases around the world

Audience Experience:   Intermediate Advanced
Platform:                         Db2 z/OS
Presentation Length:     60 minutes
Presentation Category:  Database Administration, Performance Management, Db2 Migration

Speaker biography

Ulf Heinrich is the Director of Solutions Delivery at SOFTWARE ENGINEERING GmbH. He specializes in Db2 operations and performance tuning, focusing on the growing requirement for cost reduction and 24×7 operations. As a consultant at large customer sites, he has implemented database maintenance procedures and recovery strategies, and has also experienced the pitfalls of recovery scenarios under real-world recovery pressure. His activities cover EMEA, as well as North America through SE’s U.S. subsidiary, SEGUS Inc. As a member of SE’s Request Board he works closely with customers and the development labs.

2018-03 RTS RUNSTATS

“Breaking the rules is often fun, although we cannot condone it. But breaking the rules of Real Time Statistics (RTS) in Db2 12 can really land you in hot water.

We provide two queries that may give you a Get Out of Jail Free Card – at least as far as RTS is concerned…”

In this short newsletter, I wish to briefly discuss a change in the behavior of the Real Time Statistics (RTS) tables that I have now seen at customer sites. I am not too happy about it! Before we start, a quick recap of RTS is required.

The Arrival of RTS

The Real Time Statistics were basically created by Dr Jim Teng in Db2 V7 and have always obeyed a few quite simple rules.


Real Time Statistics (RTS) rules:


  1. Rule Number one
    If in doubt, set to NULL.
    This might read a bit oddly, but Jim’s idea was that if any value is not 100% known, then the column must be set to the NULL value.


  2. Rule Number two
    Externalize when asked, or by timer.


  3. Rule Number three
    No DEFINE NO data.


  4. Rule Number four
    Utilities will always update the RTS unless they cannot – see Rule Number one.


  5. Rule Number five
    SQL updates the counters unless it cannot – think Mass Delete in a multi-table tablespace: TOTALROWS cannot be updated.

Rules are made to be broken

A couple of years into RTS usage, the clamor for changing various insert values got too big, so IBM enhanced the RTS: on creation, the REORGLASTTIME is set to the creation timestamp and all the counter columns are set to zero instead of NULL. This was a good change, as adding 1,000,000 to NULL gives NULL, whereas adding 1,000,000 to zero gives 1,000,000 – which is obviously better for working out whether or not you need to run a utility. The REORGLASTTIME setting was also accepted because when you create a nice empty object, or insert one row into an object, it is by definition in a perfectly reorganized state!

RTS rule one broken

Mass delete always caused problems, as mentioned earlier, so IBM then made a “half way” fix for the INDEX statistics in Db2 9, zeroing TOTALENTRIES on a Mass Delete, as Db2 knows that the index is now empty.

(PM34730: RTS SYSINDEXSPACESTATS TOTALENTRIES INCORRECT AFTER MASS DELETE.)

Of course it did not update TOTALROWS, as it didn’t “know” how many rows were mass deleted or truncated. This causes “drift” between the tablespace and index statistics, but that is only a minor annoyance.
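
In fact, you can look for this drift yourself. Here is a minimal sketch of a query that lists objects where the RTS index entry count was zeroed but the tablespace row count was not – it is my own example in the style of the queries later in this newsletter, it ignores partitioning subtleties (e.g. non-partitioned indexes), and it is not from any product:

-- LIST OBJECTS WHERE THE RTS INDEX COUNT WAS ZEROED (MASS DELETE)
-- BUT THE TABLESPACE STILL REPORTS ROWS - POSSIBLE TS/IX "DRIFT"
SELECT DISTINCT T.DBNAME, T.NAME AS TSNAME, T.PARTITION
      ,T.TOTALROWS, I.INDEXSPACE, I.TOTALENTRIES
FROM SYSIBM.SYSTABLESPACESTATS T
    ,SYSIBM.SYSTABLES          D
    ,SYSIBM.SYSINDEXES         X
    ,SYSIBM.SYSINDEXSPACESTATS I
WHERE T.DBNAME       = D.DBNAME
  AND T.NAME         = D.TSNAME
  AND D.TYPE         = 'T'
  AND D.CREATOR      = X.TBCREATOR
  AND D.NAME         = X.TBNAME
  AND X.CREATOR      = I.CREATOR
  AND X.NAME         = I.NAME
  AND T.PARTITION    = I.PARTITION
  AND I.TOTALENTRIES = 0
  AND T.TOTALROWS    > 0
ORDER BY 1 , 2 , 3
;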

Db2 12 breaks rule one

Now, in Db2 12, Rule Number one has been broken again. Not that much of a break, but still not brilliant! What they have done is change the STATSLASTTIME to also be, nearly, the creation timestamp – as if creating an object set the RUNSTATS columns to valid data!

Naturally, the Db2 Catalog is still all -1’s. This makes generating utilities based on the RTS a little bit tricky, as you can no longer trust the STATSLASTTIME to actually be the time a RUNSTATS utility really ran – which was the *whole*, and only, purpose of this column! If only they had set the STATSLASTTIME to ‘0001-01-01-00.00.00.000000’, then all would be well…

Who woke the dogs up? (Or, in American jargon: Who let the dogs out?)

PI79234: SYSIBM.SYSTABLESPACESTATS.STATSINSERTS IS NOT UPDATED BY RTS EXTERNALIZATION SINCE OBJECT IS CREATED
is the APAR (PTF UI48494) that caused me all the trouble.

Where’s the beef?

Well, the problem is: if you are generating RUNSTATS based on the RTS – and who isn’t these days? – then you are probably using this column. Now it *looks* like a RUNSTATS has been run at least once.

This is false, and it can lead to RUNSTATS not being run when it most definitely should have been.

Check all your home-grown RUNSTATS checkers. Remember DSNACCOX is also a little bit broken, as it uses these predicates:

(STATSLASTTIME IS NULL OR
 STATSLASTTIME < LOADRLASTTIME OR
 STATSLASTTIME < REORGLASTTIME OR
 STATSLASTTIME < latest PROFILE_UPDATE for the table space OR
 …)

Make sure you do not use the STATSLASTTIME, but instead join across to the SYSINDEXPART or SYSTABLEPART and pull out the STATSTIME column. This data is still ok and not fake news!

The scope of the problem

To find out the scope of the problem at your site – or even whether you have this problem at all – you can run these two queries, which simply list out all the objects that, according to the Db2 Catalog, have never been RUNSTATted but, according to the RTS, have been:

Query 1

-- SELECT LIST OF TABLESPACES THAT ACCORDING TO DB2 CATALOG HAVE NOT 
-- BEEN RUNSTATTED BUT ACCORDING TO RTS HAVE BEEN.  
-- IGNORE DSNDB01 AS NO RUNSTATS FOR DIRECTORY OBJECTS 
-- IGNORE WORK DEFINED DATABASES AS NO RUNSTATS FOR THESE 
-- ONLY CHECK FOR TABLE TYPES H,M,P,R AND T 
-- DISTINCT IT FOR MULTI TABLE TABLESPACES 
SELECT DISTINCT A.DBNAME, A.NAME, A.PARTITION 
      ,A.REORGLASTTIME, A.STATSLASTTIME 
      ,B.STATSTIME, B.CREATEDTS 
FROM SYSIBM.SYSTABLESPACESTATS A 
   ,SYSIBM.SYSTABLEPART        B 
   ,SYSIBM.SYSDATABASE         C 
   ,SYSIBM.SYSTABLES           D 
WHERE B.STATSTIME = TIMESTAMP('0001-01-01-00.00.00.000000') 
  AND NOT COALESCE(A.STATSLASTTIME 
                   ,TIMESTAMP('0001-01-01-00.00.00.000000')) 
                  = TIMESTAMP('0001-01-01-00.00.00.000000') 
  AND NOT A.DBNAME = 'DSNDB01' 
  AND A.DBNAME = C.NAME 
  AND NOT C.TYPE = 'W' 
  AND A.DBNAME = B.DBNAME 
  AND A.NAME = B.TSNAME 
  AND A.PARTITION = B.PARTITION 
  AND A.DBNAME = D.DBNAME 
  AND A.NAME = D.TSNAME 
  AND D.TYPE IN ('H' , 'M' , 'P' , 'R' , 'T') 
ORDER BY 1 , 2 , 3 
; 

Query 2

-- SELECT LIST OF INDEXSPACES THAT ACCORDING TO DB2 CATALOG HAVE NOT 
-- BEEN RUNSTATTED BUT ACCORDING TO RTS HAVE BEEN. 
-- IGNORE DSNDB01 AS NO RUNSTATS FOR DIRECTORY OBJECTS 
-- IGNORE HASH INDEXES AS NO RUNSTATS ALLOWED 
-- ONLY CHECK FOR TABLE TYPES H,M,P,R AND T 
SELECT A.DBNAME, A.INDEXSPACE, A.PARTITION 
      ,A.REORGLASTTIME, A.STATSLASTTIME 
      ,B.STATSTIME, B.CREATEDTS 
FROM SYSIBM.SYSINDEXSPACESTATS A 
    ,SYSIBM.SYSINDEXPART       B 
    ,SYSIBM.SYSINDEXES         C 
    ,SYSIBM.SYSTABLES          D 
WHERE B.STATSTIME = TIMESTAMP('0001-01-01-00.00.00.000000') 
  AND NOT COALESCE(A.STATSLASTTIME 
                  ,TIMESTAMP('0001-01-01-00.00.00.000000')) 
                  = TIMESTAMP('0001-01-01-00.00.00.000000') 
  AND NOT A.DBNAME = 'DSNDB01' 
  AND A.CREATOR = B.IXCREATOR 
  AND A.NAME = B.IXNAME 
  AND A.PARTITION = B.PARTITION 
  AND A.CREATOR = C.CREATOR 
  AND A.NAME = C.NAME 
  AND NOT C.HASH = 'Y' 
  AND C.TBCREATOR = D.CREATOR 
  AND C.TBNAME = D.NAME 
  AND D.TYPE IN ('H' , 'M' , 'P' , 'R' , 'T') 
ORDER BY 1 , 2 , 3
;

You can run these queries in any version of Db2 you like, but you will only get results from a Db2 12 system that has this APAR applied and where you have since created an index or a tablespace. When I run them here on my Db2 10 and 11 systems I get zero rows back; on my test Db2 12 FL501 system I get 172 rows.

The good news is…

Our product for helping you all get “agile” in Db2 12, CDDC – ContinuousDelivery DeploymentCheck, detects and reports these data constellations as does our Statistics Health Check, of course.

I would be very interested to hear your opinions about this “little change in behavior”. Have you come across this at your site?

 

As always, any questions or comments would be most welcome!

TTFN,

Roy Boxwell

2018-02 Db2 Catalog Statistics revisited

Db2 Optimizer & access path selection for Db2 11 & Db2 12:

Db2 Catalog data | Problematic default values | Correlations in the Db2 Catalog

It has been six years since the last update, so I thought, after Terry Purcell’s excellent presentation in January 2018, it would be a good time to go over and rake the coals again – especially as a couple of things have changed for Db2 12!


Terry Purcell – Db2 12 for z/OS Optimizer and RUNSTATS improvements
Webcast replay          Abstract

Are you a RUNSTATS Master?

Every now and again, I hold a little presentation called “Are you a RUNSTATS Master?” – although these days it’s called “Db2 z/OS Lies, Damn Lies, and Statistics…” – where I describe in detail what the Db2 Optimizer uses for access path selection in relation to the Db2 Catalog data.

Surprised? You will be!

Personally, I am always surprised at how often people say “just that data?” or “is that it?” (The various other inputs to access path selection – CP speed, RID pool size, Sort pool size, max data caching size and, of course, the 80 buffer pools – also get mentioned, but these have nothing to do with RUNSTATS.)

So generally the answer is “Yes.” However, the permutations and combinations are where the devil lies in the detail – the Db2 Optimizer’s algorithms are top secret, but the input data it uses is fully described in the documentation.

Just the facts ma’am

What I want to do is show:

  • the Db2 Catalog data that is used
  • the default values that can cause surprising things to happen
  • the problem of correlations in the Db2 Catalog

Which data are used by the Db2 Optimizer and which are updated by RUNSTATS?

Here is a complete list of the eleven tables used by the Db2 Optimizer:

  1. SYSIBM.SYSCOLDIST
  2. SYSIBM.SYSCOLSTATS *
  3. SYSIBM.SYSCOLUMNS
  4. SYSIBM.SYSINDEXES
  5. SYSIBM.SYSINDEXPART
  6. SYSIBM.SYSKEYTARGETS (same as SYSCOLUMNS)
  7. SYSIBM.SYSKEYTGTDIST (same as SYSCOLDIST)
  8. SYSIBM.SYSROUTINES
  9. SYSIBM.SYSTABLES
  10. SYSIBM.SYSTABLESPACE
  11. SYSIBM.SYSTABSTATS

* degree of parallelism only and, after APAR PK62804, also “sometimes” used to bound filter factor estimates…

By the Columns

Now we can also list out all of the columns (obviously not including the key columns) which are used by the Db2 Optimizer:


SYSCOLDIST
CARDF, COLGROUPCOLNO, COLVALUE, FREQUENCYF, HIGHVALUE, LOWVALUE, NUMCOLUMNS, QUANTILENO, STATSTIME


SYSCOLSTATS
COLCARD, HIGHKEY, LOWKEY


SYSCOLUMNS
COLCARDF, HIGH2KEY, LOW2KEY


SYSINDEXES
CLUSTERING*, CLUSTERRATIO, CLUSTERRATIOF, DATAREPEATFACTORF, FIRSTKEYCARDF, FULLKEYCARDF, NLEAF, NLEVELS


SYSINDEXPART
LIMITKEY*


SYSKEYTARGETS
CARDF, HIGH2KEY, LOW2KEY, STATS_FORMAT


SYSKEYTGTDIST
CARDF, KEYGROUPKEYNO, KEYVALUE, FREQUENCYF, HIGHVALUE, LOWVALUE, NUMKEYS, QUANTILENO, STATSTIME


SYSROUTINES
CARDINALITY*, INITIAL_INSTS*, INITIAL_IOS*, INSTS_PER_INVOC*, IOS_PER_INVOC*


SYSTABLES
CARDF, EDPROC*, NPAGES, NPAGESF, PCTROWCOMP


SYSTABLESPACE
NACTIVE, NACTIVEF


SYSTABSTATS
CARD, CARDF, NPAGES


Notes: Columns marked with * are not updated by RUNSTATS, and a few columns cannot be updated at all. The column STATSTIME is used only if there are duplicates in the SYSCOLDIST table, and then the Db2 Optimizer will use the “newer” data that was probably inserted by a user.

Know your defaults

Which default column values trigger the Db2 Optimizer to use its own internal default values?


SYSCOLUMNS

If COLCARDF = -1 then use 25

SYSINDEXES

If CLUSTERRATIOF = 0 then use CLUSTERRATIO
If CLUSTERRATIO = 0 then use 0.95 if the index is CLUSTERING = ‘Y’, otherwise 0.00
If DATAREPEATFACTORF = -1 then it is ignored
If FIRSTKEYCARDF = -1 then use 25
If FULLKEYCARDF = -1 then use 25
If NLEAF = -1 then use 33 (which is SYSTABLES.CARDF / 300)
If NLEVELS = -1 then use 2

SYSROUTINES

If CARDINALITY = -1 then use 10,000
If INITIAL_INSTS = -1 then use 40,000
If INITIAL_IOS = -1 then use 0
If INSTS_PER_INVOC = -1 then use 4,000
If IOS_PER_INVOC = -1 then use 0

SYSTABLES

If CARDF = -1 then use 10,000
If NPAGESF <= 0 then use NPAGES
If NPAGES = -1 then use 501 (which is CEILING(1 + SYSTABLES.CARDF / 20))

Here you must be very careful if you are using the NPGTHRSH ZPARM, as 501 is more than the default value in most shops. This is one of the little changes in Db2 12: the value -1 is now treated as -1 for the NPGTHRSH check.


SYSTABLESPACE

If NACTIVEF = 0 then use NACTIVE
If NACTIVE = 0 then use 501 (which is CEILING(1 + SYSTABLES.CARDF / 20))

SYSTABSTATS

If CARDF = -1 then use 10,000
If NPAGES = -1 then use 501 (which is CEILING(1 + SYSTABSTATS.CARDF / 20))


So now you can see that “old” non-floating-point data may still be used today, and this can cause access path headaches!
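
A quick way to see whether such “default” data is still in play at your site is to look for the -1 markers directly. A minimal sketch of my own (not from any product) for SYSTABLES:

-- LIST TABLES THAT STILL CARRY THE -1 "NEVER RUNSTATTED" MARKER,
-- SO THE OPTIMIZER WILL FALL BACK TO ITS INTERNAL DEFAULTS
SELECT CREATOR, NAME, CARDF, NPAGESF, STATSTIME
FROM SYSIBM.SYSTABLES
WHERE TYPE  = 'T'
  AND CARDF = -1
ORDER BY 1 , 2
;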

Never ever say never

Now, to top it all, the data in SYSCOLDIST and SYSKEYTGTDIST never simply gets “deleted”.

Well, actually, in Db2 12 you can now do a RUNSTATS xxx.yyy RESET ACCESSPATH to delete all SYSCOLDIST and SYSKEYTGTDIST data and set all other relevant columns to their respective defaults, but you must time this RUNSTATS very wisely! If you run it and then forget to do a normal full RUNSTATS…

Oldie but a goldie

Once the data are inserted, they stay there until they are overwritten by new data, a RUNSTATS RESET, or the object is dropped. This all leads to some very old data in these two tables that can, and does, cause the Db2 Optimizer a ton of grief! One of the first things I do is simply select the MIN(STATSTIME) from these tables just to see how old the data really is. Do it yourself and be surprised! I have seen sites with eight-year-old data in SYSCOLDIST, and that cannot be good!
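
If you want to try it, the check really is that simple – two one-liners of my own:

-- HOW OLD IS THE OLDEST DISTRIBUTION STATISTIC?
SELECT MIN(STATSTIME)
FROM SYSIBM.SYSCOLDIST
;
SELECT MIN(STATSTIME)
FROM SYSIBM.SYSKEYTGTDIST
;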

Correlate the world

Now on to correlations… There are lots of little tricks that DBAs use to “massage” access path choice. One of these is to just set NLEVELS to 15 for a given index: lots of queries then simply refuse to touch it, as it appears to be HUGE. Now, simply updating columns can cause the Db2 Optimizer, in the best case, to ignore the updates or, in the worst case, to make things even worse! So here is a list of the correlations (in other words, if you change xxx, remember to change yyy and zzz as well):

Relationships exist among certain columns of certain tables:

  •       Columns within SYSCOLUMNS
  •       Columns in the tables SYSCOLUMNS and SYSINDEXES
  •       Columns in the tables SYSCOLUMNS and SYSCOLDIST
  •       Columns in the tables SYSCOLUMNS, SYSCOLDIST, and SYSINDEXES

 If you plan to update some values, keep in mind the following correlations:

  • COLCARDF and FIRSTKEYCARDF/FULLKEYCARDF (they must be equal for the first column and, if it is a single-column index, for the full key)
  • COLCARDF, LOW2KEY and HIGH2KEY (for a non-default COLCARDF, LOW2KEY and HIGH2KEY must be filled with data), and if the COLCARDF is 1 or 2, Db2 uses LOW2KEY and HIGH2KEY as domain statistics to generate frequencies
  • CARDF in SYSCOLDIST. CARDF is related to COLCARDF, FIRSTKEYCARDF and FULLKEYCARDF. It must be, at a minimum:
    – a value between FIRSTKEYCARDF and FULLKEYCARDF if the index contains the same set of columns
    – a value between MAX(COLCARDF of each column) and the product of all the columns’ COLCARDFs in the group
  • CARDF in SYSTABLES. CARDF must be equal to or larger than any other cardinalities, such as COLCARDF, FIRSTKEYCARDF, FULLKEYCARDF, and CARDF in SYSCOLDIST
  • FREQUENCYF and COLCARDF or CARDF. The number of frequencies collected must be less than or equal to COLCARDF for the column, or CARDF for the column group
  • FREQUENCYF. The sum of the frequencies collected for a column or column group must be less than or equal to 1
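
To make the SYSTABLES rule concrete, here is a minimal sketch of my own (not from any product) that lists columns whose COLCARDF exceeds the CARDF of their table – one of the broken correlations described above:

-- FIND COLUMN CARDINALITIES THAT EXCEED THE TABLE CARDINALITY,
-- WHICH BREAKS THE SYSTABLES.CARDF CORRELATION RULE
SELECT C.TBCREATOR, C.TBNAME, C.NAME, C.COLCARDF, T.CARDF
FROM SYSIBM.SYSCOLUMNS C
    ,SYSIBM.SYSTABLES  T
WHERE C.TBCREATOR = T.CREATOR
  AND C.TBNAME    = T.NAME
  AND T.TYPE      = 'T'
  AND C.COLCARDF  > T.CARDF
  AND C.COLCARDF >= 0
  AND T.CARDF    >= 0
ORDER BY 1 , 2 , 3
;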

New in Db2 11

In Db2 11, the table SYSSTATFEEDBACK was introduced, giving us the first chance to see what the optimizer thinks is missing. This is truly awesome, as we can then tailor our RUNSTATS to generate exactly what the optimizer needs to validate and generate a good, stable access path. Of course, you should be a little bit careful with this data, as too much of a good thing can be bad for you!

New in Db2 12

(Not just the lowercase b!)

In Db2 12, SYSSTATFEEDBACK was made even more interesting: the required RUNSTATS options are now externalized *directly* into the already existing RUNSTATS profile or, indeed, a RUNSTATS profile is actually created for you.

I think that is really dangerous, as you could easily flood your system with bogus stats from end-user QMF/SPUFI queries that were run “by accident,” or from so-called “boss queries,” where someone with *no* idea of SQL clicks together a highly complex and badly written SQL before letting it run for a weekend. Naturally, the SQL gets rewritten by a helpful, ever-present DBA, but the statistics recommendations have now landed in the profile and will be updated and kept from this point on.

My personal recommendation is to switch off this feature as it is sadly *on* by default!

Here are the ZPARMs of interest

ZPARM STATFDBK_SCOPE is set to ALL by default.
ZPARM STATFDBK_PROFILE is set to YES by default.
Plus, in table SYSIBM.SYSTABLES, the column STATS_FEEDBACK is set to Y by default.

Out of the box, it starts automatically creating profiles (for TYPE=’C’ with NUMCOLS > 1, and TYPE=’F’ or ’H’) and updating existing profiles… So you must manually check the size of your profiles every now and again, just to make sure everything is OK!
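
As a starting point for that check, here is a small sketch of my own against SYSIBM.SYSTABLES_PROFILES (verify the column names against your catalog level before use):

-- LIST RUNSTATS PROFILES BY SIZE - BIG OR FAST-GROWING ONES
-- ARE THE CANDIDATES TO REVIEW
SELECT SCHEMA, TBNAME
      ,LENGTH(PROFILE_TEXT) AS PROFILE_SIZE
      ,PROFILE_UPDATE
FROM SYSIBM.SYSTABLES_PROFILES
ORDER BY 3 DESC
;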

One other new thing in Db2 12 is that XML columns can now get statistics, to help XMLEXISTS get a better access path.

and finally

Do not forget that our little freeware tool, StatisticsHealthCheck, will find all bad correlations, old data and badly updated data for you – and it is FREE!

So I hope this little round-up of Db2 Catalog Statistics data was interesting, and, as usual, if you have any comments or questions, then please, feel free to mail me!

TTFN

Roy Boxwell

 

 

Southwest Db2 Users Group – February 2018

Db2 Forum.  Southwest Db2 Users Group – February 2018 – Grapevine (Dallas), TX, USA

SEGUS & SOFTWARE ENGINEERING proudly sponsor this event & present

1 – Pdf Presentation: Compliance with compliments! Viable Db2 z/OS workload tracking.

2 – Pdf Presentation: Db2 12 Continuous Delivery – New challenges for deployment.

3 – Pdf Presentation: Db2 z/OS Lies, Damn Lies, and Statistics…


1 – Db2 z/OS Security Audit: Compliance with compliments! Viable Db2 z/OS workload tracking.

Audit and compliance is a need that many companies want – and have – to fulfill.

There are different ways and tools that promise to be able to do it, but what can they really do, and what are the associated costs? This presentation introduces Db2 10/11 technology exploitation that delivers any DML, DDL or DCL being executed in a Db2 environment, along with identification details. Learn how you can run audit analytics against a long-term repository, pinpointing who executed a query, when, and from where. Analyze your entire workload to understand access patterns and abnormalities.


More about Db2 Audit

Presentation Outline

  • Audit needs and musts: take a journey through GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act and the “Red Flag” Rules (FCRA).
  • Solution overview with Pros/Cons: get an overview of the existing solutions and understand how they work.
  • The viable way – let Db2 do the magic! Learn about the Db2 10/11 enhancements that deliver the Db2 workload being processed, and understand why this is so efficient.
  • Customer results from the banking industry: hear how a large banking company successfully replaced its Db2 Audit feature-based reporting with a modern SQL tracking and analytics process.

 


2 – Db2 12 Continuous Delivery – New challenges for deployment.

Fundamental changes in the Db2 z world often lead to concerns. Let’s face it – some changes force us to change! While a Db2 version migration usually took months, or even years, there will be no new Db2 version after 12 – just continuous code drops.

This will have a tremendous impact on migration strategies, because we have to find a reliable way to test these code deliveries in a fraction of the time. If we succeed, business divisions will be enthused at how quickly new technology becomes available for new applications. This presentation describes the differences between Code, Catalog, Function and Application Levels, how you can control them, and how you can fall back in case of anomalies. It also illustrates how we can still be proactive in testing without burning weeks and months.
Learn how to choose from four different levels of testing and a new way of automation: CD-Screening allows you to pick and choose from KPI-based test automation. The levels include simple anomaly alerting, access path verification, clone pre-apply, and even workload capture/replay to easily discover different behaviour resulting from a new code level.


More about Db2 Continuous Delivery – CD

Presentation Outline

Joining this presentation, you’ll learn how to align Continuous Delivery to your Continuous Availability.

  • Agile, Continuous Delivery, DevOps – just buzz words, or new methodologies?
  • Db2 Code, Catalog, Function and Application Levels – differences and dependencies.
  • Activation/Deactivation of new code and how to fallback and when you can’t.
  • Different flavors of (pro-active) CD-Screening and how it can be automated:

* Anomaly alerting based on Incompatibility Change Indicators (ICIs)
* Dynamic/Static Access Path Change Detection, e.g. via Plan Management
* Clone-based code change pre-apply exploiting Backup System
* Workload-KPI verification using SQL replay and KPI comparison

Audience Experience:   Intermediate Advanced
Platform:                        Db2 z/OS
Presentation Length:     60 minutes
Presentation Category:  Database Administration Performance Management Db2 Migration

 


3 – Db2 z/OS Lies, Damn Lies, and Statistics…

– Benjamin Disraeli, Prime Minister of England (1868, 1874-1880)

The above line may, or may not, have been spoken well over 100 years ago, but the need for statistics and, above all else, accurate statistics is more important than ever in the Db2 world of today.


More about Db2 RUNSTATS

Presentation Outline

  • Db2 RUNSTATS basics & the catalog tables and columns used for access path selection
  • IBM recommendations through the ages: from Db2 V3 to Db2 12
  • Db2 RUNSTATS advanced
  • SYSCOLDIST explained
  • RUNSTATS real world Q&A :
    use of SAMPLE, COLGROUP, PROFILE, REOPT (ONCE), TABLESAMPLE SYSTEM, HISTOGRAM, …
  • RUNSTATS reversal

Speaker biography

Roy Boxwell has more than 32 years of experience in MVS, OS/390, and z/OS environments – 30 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real time database maintenance solutions. He is also an active participant, speaker and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS Newsletter.

Heart of Texas Db2 Users Group – February 2018

HOTDUG – Heart of Texas Db2 User Group – February 2018 –  Austin, TX, USA

SEGUS & SOFTWARE ENGINEERING proudly sponsor this event & present

1 – Pdf Presentation: Compliance with compliments! Viable Db2 z/OS workload tracking.

2 – Pdf Presentation: Db2 12 Continuous Delivery – New challenges for deployment.

3 – Pdf Presentation: Db2 z/OS Lies, Damn Lies, and Statistics…


1 – Db2 z/OS Security Audit: Compliance with compliments! Viable Db2 z/OS workload tracking.

Audit and compliance is a need that many companies want – and have – to fulfill.

There are different ways and tools that promise to be able to do it, but what can they really do, and what are the associated costs? This presentation introduces Db2 10/11 technology exploitation that delivers any DML, DDL or DCL being executed in a Db2 environment, along with identification details. Learn how you can run audit analytics against a long-term repository, pinpointing who executed a query, when, and from where. Analyze your entire workload to understand access patterns and abnormalities.


More about Db2 Audit

Presentation Outline

  • Audit needs and musts: take a journey through GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act and the “Red Flag” Rules (FCRA).
  • Solution overview with Pros/Cons: get an overview of the existing solutions and understand how they work.
  • The viable way – let Db2 do the magic! Learn about the Db2 10/11 enhancements that deliver the Db2 workload being processed, and understand why this is so efficient.
  • Customer results from the banking industry: hear how a large banking company successfully replaced its Db2 Audit feature-based reporting with a modern SQL tracking and analytics process.

 


2 – Db2 12 Continuous Delivery – New challenges for deployment.

Fundamental changes in the Db2 z world often lead to concerns. Let’s face it – some changes force us to change! While a Db2 version migration usually took months, or even years, there will be no new Db2 version after 12 – just continuous code drops.

This will have a tremendous impact on migration strategies, because we have to find a reliable way to test these code deliveries in a fraction of the time. If we succeed, business divisions will be enthused at how quickly new technology becomes available for new applications. This presentation describes the differences between Code, Catalog, Function and Application Levels, how you can control them, and how you can fall back in case of anomalies. It also illustrates how we can still be proactive in testing without burning weeks and months.
Learn how to choose from four different levels of testing and a new way of automation: CD-Screening allows you to pick and choose from KPI-based test automation. The levels include simple anomaly alerting, access path verification, clone pre-apply, and even workload capture/replay to easily discover different behaviour resulting from a new code level.


More about Db2 Continuous Delivery – CD

Presentation Outline

Joining this presentation, you’ll learn how to align Continuous Delivery to your Continuous Availability.

  • Agile, Continuous Delivery, DevOps – just buzz words, or new methodologies?
  • Db2 Code, Catalog, Function and Application Levels – differences and dependencies.
  • Activation/Deactivation of new code and how to fallback and when you can’t.
  • Different flavors of (pro-active) CD-Screening and how it can be automated:

* Anomaly alerting based on Incompatibility Change Indicators (ICIs)
* Dynamic/Static Access Path Change Detection, e.g. via Plan Management
* Clone-based code change pre-apply exploiting Backup System
* Workload-KPI verification using SQL replay and KPI comparison

Audience Experience:   Intermediate Advanced
Platform:                        Db2 z/OS
Presentation Length:     60 minutes
Presentation Category:  Database Administration Performance Management Db2 Migration

 


3 – Db2 z/OS Lies, Damn Lies, and Statistics…

– Benjamin Disraeli, Prime Minister of England (1868, 1874-1880)

The above line may, or may not, have been spoken well over 100 years ago, but the need for statistics and, above all else, accurate statistics is more important than ever in the Db2 world of today.


More about Db2 RUNSTATS

Presentation Outline

  • Db2 RUNSTATS basics & the catalog tables and columns used for access path selection
  • IBM recommendations through the ages: from Db2 V3 to Db2 12
  • Db2 RUNSTATS advanced
  • SYSCOLDIST explained
  • RUNSTATS real world Q&A :
    use of SAMPLE, COLGROUP, PROFILE, REOPT (ONCE), TABLESAMPLE SYSTEM, HISTOGRAM, …
  • RUNSTATS reversal

Speaker biography

Roy Boxwell has more than 32 years of experience in MVS, OS/390, and z/OS environments – 30 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real time database maintenance solutions. He is also an active participant, speaker and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS Newsletter.

IDUG Db2 Tech Conference 2018 in Philadelphia

IDUG Db2 Tech Conference 2018 in Philadelphia, Pennsylvania, USA

IDUG North America

SEGUS & SOFTWARE ENGINEERING present:

Db2 12 – 12 months agile production experience with more focus on the Data and less on the base

> Pdf Presentation


Migrating to Db2 12 looks like pretty much the same procedure as every 3 years. Then we get introduced to a variety of levels that control how new features and functions are delivered, and we realize that the every-3-years-same-procedure is history. The levels, their dependencies and their behavior are a fundamental change, implemented to support agile development and, more specifically, its associated Continuous Delivery of future upgrades and fixes. As usual, this comes with benefits, but also challenges.

This presentation covers not only customers’ experiences migrating to Db2 12, but also how the changed maintenance approach has forced them to change existing procedures.

The second part covers daily operation and the exploitation of Db2 12 enhancements by customers in different industries.

The overall results aren’t bad, but a major conclusion is that this Db2 version is significantly different from the versions before it. It’s the first agile version, with all its pros and cons. According to IBM’s July 2017 announcement, the new database design “represents the elemental nature of Db2 (think periodic table) and connotes the fundamental importance of hybrid data management.” Where does this materialize in our daily work with Db2? Join this presentation and benefit from the experience of other Db2 customers around the world running Db2 12.


More about Db2 Continuous Delivery – CD

Presentation Outline

  1. a – Db2 migration and maintenance – meet the agile and continuous delivery face of Db2.
    b – Db2 Code, Catalog, Function and Application Levels – differences, dependencies and how to successfully manage them.
  2. Running Db2 12 and exploiting new features – what’s hot and how to exploit it best?
  3. Emphasizing the Data – what has changed since July, and what are the hybrid aspects of managing Data?
  4. Top 10 list of tips and gotchas from a year of Db2 12 production databases around the world

Audience Experience:   Intermediate Advanced
Platform:                         Db2 z/OS
Presentation Length:     60 minutes
Presentation Category:  Database Administration, Performance Management, Db2 Migration

Speaker biography

Roy Boxwell has more than 32 years of experience in MVS, OS/390, and z/OS environments – 30 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real time database maintenance solutions. He is also an active participant, speaker and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS Newsletter.

2017-11 Db2 APAR list: An APAR a day keeps the bugs at bay

Db2 12 APAR – a time-saving list

With all the talk about “agile” going on, and referring back to one of my older newsletters on APARs (2017-09*), I think it is time to tell you all about another little service that our company offers.

(completely free, simple and no marketing spam)


Db2 12 Agile & APAR previous newsletter:

2017-09: Db2 12 SQL Access path: Death by APAR :  How many APARs really can affect access paths?

APAR Database

The APAR database can be accessed by anyone with an IBM userid, and you can merrily search to see if the problem you have hit has already been found and fixed, or if a fix is in the works.

The problem is: What about the bugs that you do not *know* you have hit?

Get someone else to do the donkey work

The answer is to get someone else to do all the research for you, and deliver the answer by e-mail every month for all current releases of Db2 going back two years.

Who is that donkey? You will never guess…  😉

Three is the key

There are three Excel spreadsheets here.


  1. SQL Performance
    The first is RTS. This lists any and all APARs to do with the Real-Time Statistics tables. If you rely on these tables to decide when to run REORG, COPY and RUNSTATS, then you want to make sure that they are being correctly updated, don’t you?


  2. SQL Access Path
    The second is RUNSTATS. RUNSTATS is a critical utility program and contains bugs like any other non-trivial program. I count RUNSTATS bugs as *always* a personal HIPER. I rely on statistics and so does the Db2 Optimizer. If there is bad data here, then your access paths have no chance!


  3. Performance PTFs
    Finally, the biggest list: those APARs that have anything to do with SQL Performance and SQL Access Paths (if not already in the RUNSTATS list, of course!). Performance PTFs are pretty important!

Going hyper over HIPER

All the APARs have their related PTF. A handy little HIPER column tells you whether or not a fix is really important. As I mentioned, I treat the RUNSTATS ones as personal HIPERs. Also added is a PE indicator for when a PTF goes bad on you (PTF in Error), so you can see if you have introduced a problem by correcting another.

One Excel example: the SQL Performance spreadsheet

[Screenshot: free Db2 12 APAR list for Db2 z/OS – SQL Performance (RTS), Performance PTFs, SQL Access Path (RUNSTATS)]

This is from the SQL Performance spreadsheet, and you can see how it looks.

The same table in HTML:

APAR     CLOSED      STATUS        Db2 10   Db2 11   Db2 12   HIPER  Description
PI85305  2017-11-01  Closed        N/A      UI51606  UI51601         INEFFICIENT INDEX CHOSEN WHEN INDEX CAN DO INDEX SKIPPING…
PI85418  2017-19-25  Modified      UI50098  UI50099  UI50582         PREPARE TAKES LONG TIME AND HIGH CPU IF THE QUERY CONTAINS MA…
PI85463              New & Closed  N/A      N/A      UI51342         TABLESPACE SCAN INSTEAD OF INDEX ACCESS – WITH INDEX HAVIN…
…

Interested in making your life a little bit easier and safer?

If you are interested in Db2 APARs and you have *no* time to go scrabbling around the internet trying to find out which APARs are needed, worthwhile, or dangerous, then just email our technical support at techsupport@seg.de and ask to be added to the APAR mailing list.

It is completely free, you will not get marketing spam, and it really is as simple as that!

 

Get AGILE now!

With these handy lists, you can quickly and easily review the state of your Db2, and react quickly and in a timely manner when you have to.

 

As usual, if you have any comments or queries please feel free to drop me a line!

TTFN

 

Roy Boxwell

2017-10 Db2 log size: How big is your LOG?

 

How do you validate that your Db2 LOGs are OK?

The sizing of LOGs changed quite a bit in Db2 11 and Db2 12, so I thought a little recap of correct sizing and the relevant ZPARMs was in order this month.

– Db2 Active logs – how many?

The perennial favourite “just enough” is of course the correct answer! Naturally, you want to make sure that about 24 hours’ worth of data is available on your active logs, as reading the archives – especially if they have been migrated to tape – is a real performance killer during restart and rollback processing!

The range of active logs you can have is from 2 to 93 with 3 being the default.

– Db2 Archive logs – how many?

Another perennial favourite “just enough” is of course the correct answer! (See more details later in this newsletter.)

The range of archive logs you can have is from 10 to 10,000, with 10,000 being the default.

Where am I?

How can you find out what your system is doing? Simply run DSNJU004, the stand-alone BSDS print log map utility, to look at the current data. Here is an example JCL to do it all for you:

//BSDS     EXEC PGM=DSNJU004
//STEPLIB  DD DSN=<your.exit.lib>,DISP=SHR
//         DD DSN=<your.load.lib>,DISP=SHR
//SYSUT1   DD DSN=<your.BSDS 01>,DISP=SHR
//SYSUT2   DD DSN=<your.BSDS 02>,DISP=SHR
//SYSPRINT DD SYSOUT=*

And here is the output from my little Db2 12 subsystem, cut down to show just the bits I want to use!

ACTIVE LOG COPY 1 DATA SETS
START RBA/TIME       END RBA/TIME          DATE/LTIME DATA SET INFORMATION
-------------------- --------------------- ---------- --------------------
000000000013516C0000  0000000000135387FFFF  2016.068 DSN=DC10.LOGCOPY1.DS01
2017.307  12:32:08.6  2017.308  20:54:21.1  17:27    STATUS=REUSABLE
00000000001353880000  00000000001355A3FFFF  2016.068 DSN=DC10.LOGCOPY1.DS02
2017.308  20:54:21.1  2017.310  07:31:00.0  17:27    STATUS=REUSABLE
00000000001355A40000  00000000001357BFFFFF  2016.068 DSN=DC10.LOGCOPY1.DS03
2017.310  07:31:00.0  ........  ..........  17:27    STATUS=NOTREUSABLE

ARCHIVE LOG COPY 1 DATA SETS
START RBA/TIME        END RBA/TIME         DATE/LTIME DATA SET INFORMATION
--------------------  -------------------- ---------- --------------------
00000000000000000000  000000000000021BFFFF  2016.068  DSN=DC10.ARCHLOG1.D16068.T1743011.A0000001
2016.068  18:37:21.8  2016.068  18:43:00.9  17:43     VOL=SE121D UNIT=SYSALLDA
                                                                 CATALOGUED
000000000000021C0000  0000000000000437FFFF  2016.068  DSN=DC10.ARCHLOG1.D16068.T1743391.A0000002
2016.068  18:43:00.9  2016.068  18:43:39.1  17:43     VOL=SE121D UNIT=SYSALLD
                                                                 CATALOGUED
00000000000004380000  0000000000000653FFFF  2016.069  DSN=DC10.ARCHLOG1.D16069.T0756574.A0000003
2016.068  18:43:39.1  2016.069  08:56:57.3  7:56      VOL=SE121D UNIT=SYSALLD
                                                                 CATALOGUED

From this you can see that I have three active logs, each lasting about one to two days, and that the complete archives from the first install are still available – that will probably not be true in your case!

Now, much later in the output, you can see where I am today:

000000000013516C0000  0000000000135387FFFF  2017.308   
DSN=DC10.ARCHLOG1.D17308.T1954212.A0002638
2017.307  12:32:08.6   2017.308  20:54:21.1 19:54 VOL=SE123 UNIT=SYSALLD
                                                            CATALOGUED
00000000001353880000 00000000001355A3FFFF   2017.310
DSN=DC10.ARCHLOG1.D17310.T0631002.A0002639
2017.308 20:54:21.1    2017.310  07:31:00.0 6:31 VOL=SE123F UNIT=SYSALLD
                                                            CATALOGUED

Then comes the COPY 2 data, where you can see that I do not even archive the COPY 2 logs. Again, this should not be the case in your shop.

ACTIVE LOG COPY 2 DATA SETS
START RBA/TIME        END RBA/TIME         DATE/LTIME  DATA SET INFORMATION
--------------------  -------------------- ----------  -------------------- 
000000000013516C0000  0000000000135387FFFF  2016.068  DSN=DC10.LOGCOPY2.DS01
2017.307  12:32:08.6  2017.308  20:54:21.1  17:27     STATUS=REUSABLE
00000000001353880000  00000000001355A3FFFF  2016.068  DSN=DC10.LOGCOPY2.DS02
2017.308  20:54:21.1  2017.310  07:31:00.0  17:27     STATUS=REUSABLE
00000000001355A40000  00000000001357BFFFFF  2016.068  DSN=DC10.LOGCOPY2.DS03
2017.310  07:31:00.0    ........  ........  17:27     STATUS=NOTREUSABLE
ARCHIVE LOG COPY 2 DATA SETS
NO ARCHIVE DATA SETS DEFINED FOR THIS COPY

From this data you can work out a few simple things:

1)      How often do you fill an active log?

2)      How many archive logs do you have in the BSDS, and do they actually still exist?

 

  • You can now also check that all your active logs have the same dataset characteristics (Primary Quantity, Secondary, etc.).
  • If you are filling up more than one log per 10 minutes, you may want to check the size of your logs. The size of the active log can be set to an absolute physical maximum of 5,825 cylinders (87,375 TRKs) for non-EAV and 5,817 cylinders (87,255 TRKs) for EAV. Why the strange difference? Because EAV can only allocate in blocks of 21 cylinders due to the way the EAV volumes work. (See later for more information about using more than 65,535 TRKs for an archive log though!)
  • Space allocation:
    The initial allocation of space is done through the Install CLIST and the tailored sample JCL in the DSNTIJIN member. That is where you can override whatever values are generated by Db2 with “better” ones for your system. In the bad old days the allocation was in RECORDS, and today I see KILOBYTES most of the time – which is still a bit of a headache when you want to allocate in CYLINDERS, but it is all doable, as the DEFINE CLUSTER syntax allows these size definitions (a minimal sketch follows the list):
CYLINDERS(primary)
KILOBYTES(primary)
MEGABYTES(primary)
RECORDS(primary)
TRACKS(primary)
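
For illustration, here is a minimal IDCAMS sketch for one active log dataset at the non-EAV maximum. The dataset name follows my little DC10 system and the SHAREOPTIONS are illustrative – check your tailored DSNTIJIN member for the attributes your shop really uses:

//DEFLOG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* DEFINE A NEW ACTIVE LOG AT THE NON-EAV MAXIMUM OF 5825 CYLS */
  DEFINE CLUSTER                   -
       ( NAME(DC10.LOGCOPY1.DS04)  -
         LINEAR                    -
         CYLINDERS(5825)           -
         SHAREOPTIONS(2 3) )
/*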

Log ZPARM of Interest

For the log, there is really only one ZPARM of any real importance when it comes to performance, and that is OUTBUFF. Set this as big as you can “afford” in terms of real memory, as the storage is page-fixed! 400,000 KB is the current maximum, and this is the IBM recommended value. Reading from memory for rollback is a lot quicker than reading from the active log, which in turn is a lot faster than reading the archive log.

From the documentation:

The larger the output buffer, the more likely that a log read request can return the required log records without needing I/O operations. This is a particularly important consideration if the DB2 subsystem is to be used with a data replication product that uses IFCID 306. In DB2 12, log buffers are permanently page-fixed. When you estimate real storage usage, you must use the entire size that you specify for the OUTBUFF parameter. To avoid page-fixing more storage than necessary, carefully choose the setting for OUTBUFF.
 
Choose the largest size that your system can tolerate for the log output buffer. Because the pages for the log output buffer are permanently fixed in real storage, choose the largest size that you can dedicate in real storage. A larger size for the log output buffer might decrease the number of forced I/O operations that occur because additional buffers are unavailable, and can also reduce the number of wait conditions.

Archive logs are different

The size of the archive logs and how many are recorded in the BSDS are input parameters on installation panel DSNTIPA:

DSNTIPA INSTALL DB2 - ARCHIVE LOG DATA SET PARAMETERS
===> _
Enter data below:
1 PRIMARY QUANTITY ===> 125    Primary space allocation in cylinders
2 SECONDARY QTY    ===> 15     Secondary space allocation in cylinders
3 DEVICE TYPE 1    ===> TAPE   Unit name for COPY1 archive logs
4 DEVICE TYPE 2    ===>        Unit name for COPY2 archive logs
5 BLOCK SIZE       ===> 24576  Rounded up to 4096 multiple
6 READ TAPE UNITS  ===> 2      Number of allocated read tape units
7 DEALLOC PERIOD   ===> 0      Time interval to deallocate tape units
8 RECORDING MAX    ===> 10000  Number of data sets recorded in BSDS

In Db2 10 and Db2 11 you also had an allocation unit (ALCUNIT ZPARM) input field, with valid values of BLK, TRK or CYL and a default of BLK.

The PRIQTY now has a range from 1 to 4,369 and a default of 125. (Remember the absolute physical maximum earlier of 5,825/5,817 CYLs? That is why you can/should edit the DSNTIJIN member!)

If you are migrating from Db2 11, the ALCUNIT value is used to convert from BLK or TRK to CYLs in Db2 12. If you are not yet on Db2 12, ALCUNIT tells you the allocation unit for the PRIQTY – BLK, TRK or CYL – with a range of 1 to 999,999.

Some further info about the size of the archive logs – remember that they are flat files, *not* VSAM linear datasets:

By default, DFSMS Direct Access Device Space Management (DADSM) limits the space allocation on a single volume to less than 65535 tracks. Therefore, if the archive log data set size can be greater than or equal to 65535 tracks, you need to specify a primary space quantity of less than 65535 tracks. This action forces the archive log data set to extend to a second volume.

Alternatively, the archive log data sets can be allocated by a DFSMS data class that has a DSNMTYP (data set name type) setting of LARGE or EXT. In this case, you can specify a primary space quantity larger than 65535 tracks without extending to a second volume. LARGE indicates that data sets in the data class are to be allocated in large physical sequential format. EXT indicates that data sets in the data class are to be allocated in extended physical sequential format. A setting of EXT is recommended, and it is required for striping of data sets. If you specify EXT, also set the IFEXT (if extended) parameter to R (required) rather than P (preferred).

The SECQTY has a range from 1 to 4,369 and a default of 15. The same rules about ALCUNIT and the switch to CYL in Db2 12 apply here as for the PRIQTY.

The recording max just tells you how many archive log dataset names the BSDS can remember. The documentation mentions two very important facts about this number:

When this number is exceeded, recording resumes at the beginning of the BSDS.

You must create image copies of all DB2 objects, probably several times, before the archive log data sets are discarded. If you fail to retain an adequate number of archive log data sets for all the image copies, you might need to cold start or reinstall DB2. In both cases, data is lost.

So there are two top tips for you: check whether you have had a wrap-around, and check whether you have datasets hanging around that Db2 can no longer allocate and use.

Striped or not?

Finally, for logs: striping of the active logs is nowadays not recommended, as devices are so much faster, but striping and/or compression of the archive logs is still recommended if space is an issue.

From the Db2 Documentation:

In most cases, do not stripe active log data sets. You can use DFSMS to stripe the logs, but striping is generally unnecessary with the latest devices. Striping increases the number of I/Os, which can increase CPU time and lead to potentially greater DB2 commit times. Striping might improve the performance of batch insert jobs, but it might also harm the performance of online transaction processing. Striping is especially risky for performance if you replicate the logs over long distances.
Consider striping and compressing archive log data sets by using DFSMS. Doing so might speed up the time to offload the logs and the time to recover by using archive logs. However, the performance of DFSMS striping and compression depends on the z/OS release and the types of hardware that you use.

Check your CHECKPOINTS

Getting back to the output of the DSNJU004 job: a few more things, like conditional restart records etc., are output, and then comes the Checkpoint Queue:

 

                    CHECKPOINT QUEUE                
                08:00:31 NOVEMBER 06, 2017          
TIME OF CHECKPOINT       07:31:03 NOVEMBER 06, 2017 
BEGIN CHECKPOINT RBA            00000000001355B750DC
END CHECKPOINT RBA              00000000001355B81B18
END CHECKPOINT STCK             00D365BB89179A529200
TIME OF CHECKPOINT       20:54:23 NOVEMBER 04, 2017 
BEGIN CHECKPOINT RBA            000000000013538813C5
END CHECKPOINT RBA              00000000001353889D76
END CHECKPOINT STCK             00D363EB5D1777CFB000
TIME OF CHECKPOINT       12:32:09 NOVEMBER 03, 2017 
BEGIN CHECKPOINT RBA            00000000001351723326
END CHECKPOINT RBA              0000000000135172D899
END CHECKPOINT STCK             00D362393D4E6F748C00
TIME OF CHECKPOINT       07:30:11 NOVEMBER 03, 2017 
BEGIN CHECKPOINT RBA            0000000000134F5915DD
END CHECKPOINT RBA              0000000000134F59C445
END CHECKPOINT STCK             00D361F5BEB5A941CA00
TIME OF CHECKPOINT       07:42:59 NOVEMBER 01, 2017 
BEGIN CHECKPOINT RBA            0000000000134D34EB53
END CHECKPOINT RBA              0000000000134D357022
END CHECKPOINT STCK             00D35F74E054AE870800

You can see how busy my Db2 system is, anyway your data will be a lot different I am sure!

Checkpoint ZPARMS of interest


CHKTYPE

LOGRECS/MINUTES/BOTH. MINUTES is the default and is what I recommend. This parameter can still have the old value SINGLE, which then respects the CHKFREQ value as either minutes or log records, depending on the size of the number. Change this to MINUTES!

CHKFREQ

If LOGRECS: how many log records before a checkpoint (1,000 – 16,000,000).
If MINUTES: how many minutes before a checkpoint (1 – 60).

CHKLOGR

If BOTH: how many log records before a checkpoint (1,000 – 99,999,999).

CHKMINS

If BOTH: how many minutes before a checkpoint (1 – 1,439).


IBM, and I, recommend a checkpoint every five minutes or so to get optimum rollback and log usage. Naturally, your mileage may vary. Also remember that Db2 writes a checkpoint at active log switch as well, so try to guarantee that at least ten checkpoints’ worth of data fits onto one log dataset.
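
If you need to adjust the interval without recycling Db2, the SET LOG command can change the checkpoint frequency on the fly – a simple example (the new value is not permanent; it lasts until it is changed again or overridden at restart):

-SET LOG CHKTIME(5)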

One extra note arrived in Db2 11:

If the CHECKPOINT TYPE field is set to BOTH, DB2 prevents log checkpoints from being taken too frequently and degrading performance. Log checkpoints are scheduled, but not necessarily immediately taken, when the number of minutes that passes reaches the value of CHKMINS.


Following all this advice should mean that your logs settle down and your system runs a little bit smoother!

 

As usual, if you have any comments or queries please feel free to drop me a line!

TTFN

 

Roy Boxwell