Southwest Db2 Users Group – February 2018

Db2 Forum. Southwest Db2 Users Group – February 2018 – Grapevine (Dallas), TX, USA

SEGUS & SOFTWARE ENGINEERING proudly sponsor this event & present

1 – Pdf Presentation – Compliance with compliments! Viable Db2 z/OS workload tracking.

2 – Pdf Presentation – Db2 12 Continuous Delivery – New challenges for deployment.

3 – Pdf Presentation – Db2 z/OS Lies, Damn Lies, and Statistics…


1 – Db2 z/OS Security Audit: Compliance with compliments! Viable Db2 z/OS workload tracking.

Audit and compliance is a need that many companies want – and have – to fulfill.

There are different ways and tools that promise to do it, but what can they really deliver and what are the associated costs? This presentation introduces Db2 10/11 technology exploitation that captures any DML, DDL, or DCL executed in a Db2 environment, along with identification details. Learn how you can run audit analytics against a long-term repository, pinpointing who executed a query, when, and from where. Analyze your entire workload to understand access patterns and abnormalities.


More about Db2 Audit

Presentation Outline

  • Audit needs and musts. Take a journey through GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act, and the “Red Flag” Rules (FCRA).
  • Solution overview with pros and cons. Get an overview of the existing solutions and understand how they work.
  • The viable way – let Db2 do the magic! Learn about the Db2 10/11 enhancements that deliver the Db2 workload being processed, and understand why this approach is so efficient.
  • Customer results from the banking industry. Hear the experience of a large banking company and how they successfully replaced their reporting based on the Db2 Audit feature with a modern SQL tracking and analytics process.

 


2 – Db2 12 Continuous Delivery – New challenges for deployment.

Fundamental changes in the Db2 z world often lead to concerns. Let’s face it – some changes force us to change! While a Db2 version migration used to take months, or even years, there will be no new Db2 version after 12 – only continuous code drops.

This will have a tremendous impact on migration strategies, because we have to find a reliable way to test these code deliveries in a fraction of the time. If we succeed, business divisions will be enthused by how quickly new technology becomes available for new applications. This presentation describes the differences between Code, Catalog, Function and Application Levels, how you can control them, and how you can fall back in case of anomalies. It also illustrates how we can still be pro-active in testing without burning weeks and months.
Learn how to choose from four different levels of testing and a new way of automation. CD-Screening allows you to pick and choose from KPI-based test automation. The levels include simple anomaly alerting, access path verification, clone pre-apply, and even workload capture/replay to easily discover different behaviour resulting from a new code level.


More about Db2 Continuous Delivery – CD

Presentation Outline

Joining this presentation, you’ll learn how to align Continuous Delivery to your Continuous Availability.

  • Agile, Continuous Delivery, DevOps – just buzz words, or new methodologies?
  • Db2 Code, Catalog, Function and Application Levels – differences and dependencies.
  • Activation/deactivation of new code, how to fall back – and when you can’t.
  • Different flavors of (pro-active) CD-Screening and how it can be automated:

* Anomaly alerting based on Incompatibility Change Indicators (ICIs)
* Dyn./Stat. Access Path Change Detection, e.g. via Plan Management
* Clone-based code change pre-apply exploiting BACKUP SYSTEM
* Workload-KPI verification using SQL replay and KPI comparison

Audience Experience:    Intermediate/Advanced
Platform:               Db2 z/OS
Presentation Length:    60 minutes
Presentation Category:  Database Administration, Performance Management, Db2 Migration

 


3 – Db2 z/OS Lies, Damn Lies, and Statistics…

– Benjamin Disraeli, Prime Minister of the United Kingdom (1868, 1874-1880)

The above line may, or may not, have been spoken well over 100 years ago, but the need for statistics and, above all else, accurate statistics is more important than ever in the Db2 world of today.


More about Db2 RUNSTATS

Presentation Outline

  • Db2 RUNSTATS basics: catalog tables and columns used for access path selection
  • IBM recommendations through the ages: from Db2 V3 to Db2 12
  • Db2 RUNSTATS advanced
  • SYSCOLDIST explained
  • RUNSTATS real world Q&A :
    use of SAMPLE, COLGROUP, PROFILE, REOPT (ONCE), TABLESAMPLE SYSTEM, HISTOGRAM, …
  • RUNSTATS reversal

Speaker biography

Roy Boxwell has more than 32 years of experience in MVS, OS/390, and z/OS environments – 30 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real-time database maintenance solutions. He is also an active participant, speaker, and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS newsletter.

Heart of Texas Db2 Users Group – February 2018

HOTDUG – Heart of Texas Db2 Users Group – February 2018 – Austin, TX, USA

SEGUS & SOFTWARE ENGINEERING proudly sponsor this event & present

1 – Pdf Presentation – Compliance with compliments! Viable Db2 z/OS workload tracking.

2 – Pdf Presentation – Db2 12 Continuous Delivery – New challenges for deployment.

3 – Pdf Presentation – Db2 z/OS Lies, Damn Lies, and Statistics…



2017-10 Db2 log size: How big is your LOG?

 

How to validate that your Db2 LOGs are OK?

Sizing of LOGs changed quite a bit in Db2 11 and Db2 12, so I thought a little recap about the correct sizing and ZPARMS was in order this month.

– Db2 Active logs – how many?

The perennial favourite “just enough” is of course the correct answer! Naturally you want to make sure that about 24 hours’ worth of data is available on your active logs, as reading the archives, especially if they have been migrated to tape, is a real performance killer during restart and rollback processing!

The range of active logs you can have is from 2 to 93 with 3 being the default.

– Db2 Archive logs – how many?

Another perennial favourite “just enough” is of course the correct answer! (See more details later in this newsletter.)

The range of archive logs you can have is from 10 to 10,000, with 10,000 being the default.

Where am I?

How can you find out what your system is doing? Simply run the DSNJU004 BSDS stand-alone print routine to look at the current data. Here is an example JCL to do it all for you:

//BSDS     EXEC PGM=DSNJU004
//STEPLIB  DD DSN=<your.exit.lib>,DISP=SHR
//         DD DSN=<your.load.lib>,DISP=SHR
//SYSUT1   DD DSN=<your.BSDS 01>,DISP=SHR
//SYSUT2   DD DSN=<your.BSDS 02>,DISP=SHR
//SYSPRINT DD SYSOUT=*

And here is the output from my little Db2 12 subsystem, cut down to show just the bits I want to use!

ACTIVE LOG COPY 1 DATA SETS
START RBA/TIME       END RBA/TIME          DATE/LTIME DATA SET INFORMATION
-------------------- --------------------- ---------- --------------------
000000000013516C0000  0000000000135387FFFF  2016.068 DSN=DC10.LOGCOPY1.DS01
2017.307  12:32:08.6  2017.308  20:54:21.1  17:27    STATUS=REUSABLE
00000000001353880000  00000000001355A3FFFF  2016.068 DSN=DC10.LOGCOPY1.DS02
2017.308  20:54:21.1  2017.310  07:31:00.0  17:27    STATUS=REUSABLE
00000000001355A40000  00000000001357BFFFFF  2016.068 DSN=DC10.LOGCOPY1.DS03
2017.310  07:31:00.0  ........  ..........  17:27    STATUS=NOTREUSABLE

ARCHIVE LOG COPY 1 DATA SETS
START RBA/TIME        END RBA/TIME         DATE/LTIME DATA SET INFORMATION
--------------------  -------------------- ---------- --------------------
00000000000000000000  000000000000021BFFFF  2016.068  DSN=DC10.ARCHLOG1.D16068.T1743011.A0000001
2016.068  18:37:21.8  2016.068  18:43:00.9  17:43     VOL=SE121D UNIT=SYSALLDA
                                                                 CATALOGUED
000000000000021C0000  0000000000000437FFFF  2016.068  DSN=DC10.ARCHLOG1.D16068.T1743391.A0000002
2016.068  18:43:00.9  2016.068  18:43:39.1  17:43     VOL=SE121D UNIT=SYSALLD
                                                                 CATALOGUED
00000000000004380000  0000000000000653FFFF  2016.069  DSN=DC10.ARCHLOG1.D16069.T0756574.A0000003
2016.068  18:43:39.1  2016.069  08:56:57.3  7:56      VOL=SE121D UNIT=SYSALLD
                                                                 CATALOGUED

From this you can see that I have three active logs that last about one to two days, and the complete archives from the first install are still available – that will probably not be true in your case!

Now, much later in the output, you can see where I am today:

000000000013516C0000  0000000000135387FFFF  2017.308   
DSN=DC10.ARCHLOG1.D17308.T1954212.A0002638
2017.307  12:32:08.6   2017.308  20:54:21.1 19:54 VOL=SE123 UNIT=SYSALLD
                                                            CATALOGUED
00000000001353880000 00000000001355A3FFFF   2017.310
DSN=DC10.ARCHLOG1.D17310.T0631002.A0002639
2017.308 20:54:21.1    2017.310  07:31:00.0 6:31 VOL=SE123F UNIT=SYSALLD
                                                            CATALOGUED

Then comes the COPY 2 data, where you can see that I do not even archive the COPY 2 logs. Again, this should not be the case in your shop.

ACTIVE LOG COPY 2 DATA SETS
START RBA/TIME        END RBA/TIME         DATE/LTIME  DATA SET INFORMATION
--------------------  -------------------- ----------  -------------------- 
000000000013516C0000  0000000000135387FFFF  2016.068  DSN=DC10.LOGCOPY2.DS01
2017.307  12:32:08.6  2017.308  20:54:21.1  17:27     STATUS=REUSABLE
00000000001353880000  00000000001355A3FFFF  2016.068  DSN=DC10.LOGCOPY2.DS02
2017.308  20:54:21.1  2017.310  07:31:00.0  17:27     STATUS=REUSABLE
00000000001355A40000  00000000001357BFFFFF  2016.068  DSN=DC10.LOGCOPY2.DS03
2017.310  07:31:00.0    ........  ........  17:27     STATUS=NOTREUSABLE
ARCHIVE LOG COPY 2 DATA SETS
NO ARCHIVE DATA SETS DEFINED FOR THIS COPY

From this data you can work out a few simple things:

1)      How often do you fill an active log?

2)      How many archive logs do you have in the BSDS, and do they actually still exist?

 

  • You can now also check that all your active logs have the same dataset characteristics (Primary Quantity, Secondary, etc.).
  • If you are filling up more than one log per 10 minutes, you may want to check the size of your logs. The size of the active log can be set to an absolute physical maximum of 5,825 cylinders (87,375 TRKs) for non-EAV and 5,817 cylinders (87,255 TRKs) for EAV. Why the strange difference? Because EAV can only allocate in blocks of 21 cylinders due to the way the EAV volumes work. (See later for more information about using more than 65,535 TRKs for an archive log though!)
  • Space allocation:
    The initial allocation of space is done through the install CLIST and the tailored sample JCL in the DSNTIJIN member. That is where you can override whatever values Db2 generates with “better” ones for your system. In the bad old days the allocation was in RECORDS; today I see KILOBYTES most of the time – which is still a bit of a headache when you want to allocate in CYLINDERS, but it is all doable, as the DEFINE CLUSTER syntax allows these size definitions (see the sketch after this list):
CYLINDERS(primary)
KILOBYTES(primary)
MEGABYTES(primary)
RECORDS(primary)
TRACKS(primary)
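
As a minimal sketch of such a definition in DSNTIJIN-style IDCAMS (dataset name, volume, and size are placeholders, not recommendations):

//DEFLOG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER                              -
         ( NAME(<your.log>.LOGCOPY1.DS04)     -
           LINEAR                             -
           REUSE                              -
           VOLUMES(vvvvvv)                    -
           CYLINDERS(5817)                    -
           SHAREOPTIONS(2 3) )                -
         DATA                                 -
         ( NAME(<your.log>.LOGCOPY1.DS04.DATA) )
/*

Remember that a newly defined active log dataset must also be registered in the BSDS with DSNJU003 NEWLOG before Db2 can use it.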

Log ZPARM of Interest

For the log, there is really only one ZPARM of any real importance when it comes to performance, and that is OUTBUFF. Set it to the biggest value you can “afford” in terms of real memory, as the storage is page-fixed! 400,000 KB is the current maximum, and this is the IBM-recommended value. Reading from memory for rollback is a lot quicker than reading from the active log, which in turn is a lot faster than reading the archive log.

From the documentation:

The larger the output buffer, the more likely that a log read request can return the required log records without needing I/O operations. This is a particularly important consideration if the DB2 subsystem is to be used with a data replication product that uses IFCID 306. In DB2 12, log buffers are permanently page-fixed. When you estimate real storage usage, you must use the entire size that you specify for the OUTBUFF parameter. To avoid page-fixing more storage than necessary, carefully choose the setting for OUTBUFF.
 
Choose the largest size that your system can tolerate for the log output buffer. Because the pages for the log output buffer are permanently fixed in real storage, choose the largest size that you can dedicate in real storage. A larger size for the log output buffer might decrease the number of forced I/O operations that occur because additional buffers are unavailable, and can also reduce the number of wait conditions.
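
For orientation: OUTBUFF is set on the DSN6LOGP macro in the DSNTIJUZ job. The relevant fragment looks something like this (a sketch only – other DSN6LOGP parameters and the assembler continuation columns are omitted):

DSN6LOGP OUTBUFF=400000,...        400,000 KB – the current maximum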

Archive logs are different

The size of the archive logs and how many are recorded in the BSDS are input parameters on installation panel DSNTIPA:

DSNTIPA INSTALL DB2 - ARCHIVE LOG DATA SET PARAMETERS
===> _
Enter data below:
1 PRIMARY QUANTITY ===> 125    Primary space allocation in cylinders
2 SECONDARY QTY    ===> 15     Secondary space allocation in cylinders
3 DEVICE TYPE 1    ===> TAPE   Unit name for COPY1 archive logs
4 DEVICE TYPE 2    ===>        Unit name for COPY2 archive logs
5 BLOCK SIZE       ===> 24576  Rounded up to 4096 multiple
6 READ TAPE UNITS  ===> 2      Number of allocated read tape units
7 DEALLOC PERIOD   ===> 0      Time interval to deallocate tape units
8 RECORDING MAX    ===> 10000  Number of data sets recorded in BSDS

In Db2 10 and Db2 11 you also had an allocation unit (ALCUNIT ZPARM) input field with valid values of BLK, TRK or CYL with a default of BLK.

The PRIQTY now has a range from 1 to 4,369 and a default of 125. (Remember the absolute physical maximum earlier of 5,825/5,817 CYLs? That is why you can/should edit the DSNTIJIN member!)

If migrating from Db2 11, the ALCUNIT value is used to convert from BLK or TRK to CYLs in Db2 12. If not yet in Db2 12 then the ALCUNIT tells you what the allocation unit is for the PRIQTY in either BLK, TRK or CYL with a range of 1 to 999,999.

Here is some further info about the size of the archive logs. Remember that they are flat files, *not* VSAM linear datasets:

By default, DFSMS Direct Access Device Space Management (DADSM) limits the space allocation on a single volume to less than 65535 tracks. Therefore, if the archive log data set size can be greater than or equal to 65535 tracks, you need to specify a primary space quantity of less than 65535 tracks. This action forces the archive log data set to extend to a second volume.

Alternatively, the archive log data sets can be allocated by a DFSMS data class that has a DSNMTYP (data set name type) setting of LARGE or EXT. In this case, you can specify a primary space quantity larger than 65535 tracks without extending to a second volume. LARGE indicates that data sets in the data class are to be allocated in large physical sequential format. EXT indicates that data sets in the data class are to be allocated in extended physical sequential format. A setting of EXT is recommended, and it is required for striping of data sets. If you specify EXT, also set the IFEXT (if extended) parameter to R (required) rather than P (preferred).

The SECQTY has a range from 1 to 4,369 and a default of 15. Same rules for ALCUNIT and switching to CYL in Db2 12 apply here as per the PRIQTY.

The RECORDING MAX value (ZPARM MAXARCH) just tells you how many archive log dataset names the BSDS can remember. The documentation mentions two very important facts about this number:

When this number is exceeded, recording resumes at the beginning of the BSDS.

You must create image copies of all DB2 objects, probably several times, before the archive log data sets are discarded. If you fail to retain an adequate number of archive log data sets for all the image copies, you might need to cold start or reinstall DB2. In both cases, data is lost.

So there are two top tips for you: check whether you have had a wrap-around, and check whether you have datasets hanging around that Db2 can no longer allocate and use.
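
If you do find stale archive entries, the change log inventory utility (DSNJU003) is the way to remove them from the BSDS. Here is a minimal sketch, reusing the placeholder names from the DSNJU004 job above – and remember that Db2 must be stopped while DSNJU003 updates the BSDS:

//CHGLOG   EXEC PGM=DSNJU003
//STEPLIB  DD DSN=<your.exit.lib>,DISP=SHR
//         DD DSN=<your.load.lib>,DISP=SHR
//SYSUT1   DD DSN=<your.BSDS 01>,DISP=SHR
//SYSUT2   DD DSN=<your.BSDS 02>,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE DSNAME=DC10.ARCHLOG1.D16068.T1743011.A0000001
/*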

Striped or not?

Finally, for logs: striping of the active logs is nowadays not recommended, as devices are so much faster, but striping and/or compression of the archive logs is still recommended if space is an issue.

From the Db2 Documentation:

In most cases, do not stripe active log data sets. You can use DFSMS to stripe the logs, but striping is generally unnecessary with the latest devices. Striping increases the number of I/Os, which can increase CPU time and lead to potentially greater DB2 commit times. Striping might improve the performance of batch insert jobs, but it might also harm the performance of online transaction processing. Striping is especially risky for performance if you replicate the logs over long distances.
Consider striping and compressing archive log data sets by using DFSMS. Doing so might speed up the time to offload the logs and the time to recover by using archive logs. However, the performance of DFSMS striping and compression depends on the z/OS release and the types of hardware that you use.

Check your CHECKPOINTS

Getting back to the output of the DSNJU004 job: a few more things, like conditional restart records, are output, and then comes the Checkpoint Queue:

 

                    CHECKPOINT QUEUE                
                08:00:31 NOVEMBER 06, 2017          
TIME OF CHECKPOINT       07:31:03 NOVEMBER 06, 2017 
BEGIN CHECKPOINT RBA            00000000001355B750DC
END CHECKPOINT RBA              00000000001355B81B18
END CHECKPOINT STCK             00D365BB89179A529200
TIME OF CHECKPOINT       20:54:23 NOVEMBER 04, 2017 
BEGIN CHECKPOINT RBA            000000000013538813C5
END CHECKPOINT RBA              00000000001353889D76
END CHECKPOINT STCK             00D363EB5D1777CFB000
TIME OF CHECKPOINT       12:32:09 NOVEMBER 03, 2017 
BEGIN CHECKPOINT RBA            00000000001351723326
END CHECKPOINT RBA              0000000000135172D899
END CHECKPOINT STCK             00D362393D4E6F748C00
TIME OF CHECKPOINT       07:30:11 NOVEMBER 03, 2017 
BEGIN CHECKPOINT RBA            0000000000134F5915DD
END CHECKPOINT RBA              0000000000134F59C445
END CHECKPOINT STCK             00D361F5BEB5A941CA00
TIME OF CHECKPOINT       07:42:59 NOVEMBER 01, 2017 
BEGIN CHECKPOINT RBA            0000000000134D34EB53
END CHECKPOINT RBA              0000000000134D357022
END CHECKPOINT STCK             00D35F74E054AE870800

You can see how busy my Db2 system is – anyway, your data will be a lot different, I am sure!

Checkpoint ZPARMS of interest


CHKTYPE

LOGRECS/MINUTES/BOTH. MINUTES is the default and is what I recommend. This parameter can still have the old value SINGLE, which then respects the CHKFREQ value as either minutes or log records, depending on the size of the number. Change this to MINUTES!

CHKFREQ

If LOGRECS: how many log records before a checkpoint (1,000 – 16,000,000).
If MINUTES: how many minutes before a checkpoint (1 – 60).

CHKLOGR

If BOTH: how many log records before a checkpoint (1,000 – 99,999,999).

CHKMINS

If BOTH: how many minutes before a checkpoint (1 – 1,439).


IBM, and I, recommend a checkpoint every five minutes or so to get optimum rollback and log usage. Naturally your mileage may vary. Also remember that Db2 writes a checkpoint at active log switch as well, so try to guarantee that at least ten checkpoints’ “worth” of data fits onto one log dataset.
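
If you need to adjust the frequency on a running system, the SET LOG command does it on the fly – the change lasts until the next restart or the next -SET LOG:

-DIS LOG                    shows, among other things, the current checkpoint frequency
-SET LOG CHKTIME(5)         switch to a checkpoint every five minutes
-SET LOG LOGLOAD(500000)    or to one every 500,000 log records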

One extra note arrived in Db2 11:

If the CHECKPOINT TYPE field is set to BOTH, DB2 prevents log checkpoints from being taken too frequently and degrading performance. Log checkpoints are scheduled, but not necessarily immediately taken, when the number of minutes that passes reaches the value of CHKMINS.


Following all this advice should mean that your logs settle down and your system runs a little bit smoother!

 

As usual, if you have any comments or queries please feel free to drop me a line!

TTFN

 

Roy Boxwell

2017-03 Db2 11 RBA/LRSN Migration 6 to 10 bytes

Db2 11 RBA/LRSN migration from 6 to 10 bytes: How long do you think it will take you to get from 6 to 10 bytes, and from simple/segmented to UTSs?
Are you thinking days, months, years??

This month, I want to discuss the pretty big changes that appeared in Db2 11, as these have been made even more important with Db2 12 coming around the corner.

End-of-Time

In Db2 11, the so-called end-of-time, or end-of-log, problem finally got addressed. The old “it will never run out” six-byte RBA (2 to the power of 48, or 256 TB) was extended on the left with four bytes of zeroes up to 10 bytes, and the “valid until 17th September 2042 and incremented every 16 microseconds” six-byte LRSN was extended on the left with one byte and on the right with three bytes. This means the RBA can now address 2 to the power of 80, or one yottabyte, and the LRSN goes up to about the year 36,000. More importantly, it goes down to nearly the picosecond!

2042! I will be retired by then…

Well, hold your horses! The end-of-time problem can occur way earlier than that, due to an idiosyncrasy of upgrading from non-datasharing to datasharing. To do this, Db2 must transform your current six-byte RBA into a six-byte LRSN. Naturally an LRSN is a date/time and the RBA just a byte address, so Db2 basically rounds the RBA up to be an LRSN and adds a so-called “DELTA” value to the BSDS. This delta value can surprise you, badly! One of my customers upgraded a schooling system and has now found out that their “end-of-time” is May 2018… Whoops!


 

Check your delta!

Just run a DSNJU004 on your BSDS and check for the STCK TO LRSN DELTA line:

DSNJCNVT CONVERSION PROGRAM HAS RUN   DDNAME=SYSUT1                  
   LOG MAP OF BSDS DATA SET COPY 1, DSN=SB10.BSDS01
   LTIME INDICATES LOCAL TIME, ALL OTHER TIMES ARE GMT.
         DATA SHARING MODE IS ON
         SYSTEM TIMESTAMP   - DATE=2017.079  LTIME=19:29:46.01
         UTILITY TIMESTAMP  - DATE=2016.071  LTIME=18:19:43.66 
         VSAM CATALOG NAME=SB10
         HIGHEST RBA WRITTEN       000000000000FCD54000 2017.079 20:29:46.0
         HIGHEST RBA OFFLOADED     000000000000FBF0AFFF
         RBA WHEN CONVERTED TO V4  00000000000000000000
         MAX RBA FOR TORBA         00000000000000000000
         MIN RBA FOR TORBA         00000000000000000000
         STCK TO LRSN DELTA        00000000000000000000

Here in one of our baby datasharing systems there is no delta, so I can retire!

Roadworks ahead!

I like to think of the RBA/LRSN like a three-lane German highway (so six lanes in total) that is getting widened to five lanes (so ten in total). You just *know* that the throughput will go up and the traffic jams will go down!

Will it really help?

Well, the out-of-the-box benefits are threefold:

 

  1. No LRSN “spin” – in datasharing a member must wait, or “spin”, for some styles of inserts/updates until it gets a unique LRSN. With faster and faster machines, CPU is being wasted doing nothing but spinning its heels! The IBM labs state that the percentage overhead ranges between 0% and 6%, and that heavy batch can be much more – even as much as 20%+.
  2. All of the conversion to and from six bytes is gone. Externally it is always 10 bytes; internally it is a mix. The puffing up and the shrinking down also takes CPU cycles.
  3. Converting logs and tables “may yield a few percent” performance improvement – this again from the IBM labs.

Road map required!

How do you get there? Well, first, in NFM, migrate the BSDS of each member, one at a time. When all members are done, analyze your workload, pick the biggest users of UPDATE/DELETE, and REORG these objects at the TP/TS level. This gives the biggest improvement earliest.

Rolling on the REORGs

Then trickle through the REORGs on *all* user objects. Here *all* means *all*! Well, actually not quite all… clone tables cannot be migrated to 10 bytes, so you must drop the clone table, REORG the base tablespace, and then recreate the clone table.

Remember here to make sure your ZPARMS (OBJECT_CREATE_FORMAT EXTENDED and UTILITY_OBJECT_CONVERSION NOBASIC) are correct and that the Reordered Row Format (RRF) is enabled, since Basic Row Format (BRF) is deprecated!

Forward thinking!

Other things to plan, and think about, at the same time are:

Migrate all index-based partitioned objects to table-based partitioned objects, and then migrate these to be partitioned by range, PBR (UTS). First an ALTER INDEX xxx.yyy NOT CLUSTER to make the partitioning table-based instead of index-based, then an ALTER TABLESPACE xx.yy SEGSIZE nn to make it into a PBR (UTS).
Migrate any single-table simple/segmented objects to be partitioned by growth, PBG (UTS). Just an ALTER TABLESPACE xx.yy MAXPARTITIONS 1 is enough (see the sketch below).
Note that these simple ALTERs cause package invalidation and so must be timed correctly.
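
As a minimal sketch, the ALTERs in question look like this (object names and the SEGSIZE value are placeholders):

ALTER INDEX xxx.yyy NOT CLUSTER ;        -- index-controlled -> table-controlled
ALTER TABLESPACE xx.yy SEGSIZE 32 ;      -- table-controlled partitioned -> PBR (UTS)
ALTER TABLESPACE xx.yy MAXPARTITIONS 1 ; -- single-table simple/segmented -> PBG (UTS)

The subsequent tablespace-level REORG materializes the conversion and, given the ZPARM settings mentioned above, should get you the extended 10-byte format at the same time.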

Why all the fuss?

Why do this? Well, remember that the UTS, as the underlying tablespace, is the *only* tablespace type where Db2 is adding new functionality and performance. Think about inline LOBs (especially with COMPRESS YES), FTB in Db2 12, the HASH method, fast insert, etc. It is clearly the aim of IBM to get to a purely UTS world at some point.

Relative Page Numbering – Should I wait?

In Db2 12, the Relative Page Numbering (RPN) scheme enables varying sizes of DSSIZE, also at the index level. This is seriously good news, *but* it requires a TS-level REORG to get there, and you must be in Db2 12. So if you have *vast* history-style partitioned tables, you might wait until you are in Db2 12 before you actually do the first big REORG, including going to the 10-byte RBA/LRSN, if space and time are a big worry.
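
For when you get there, the Db2 12 DDL side of RPN looks like this (a sketch – names and sizes are placeholders, and it is the TS-level REORG that actually materializes the change):

ALTER TABLESPACE xx.yy PAGENUM RELATIVE ;
ALTER TABLESPACE xx.yy ALTER PARTITION 7 DSSIZE 256 G ;

The first statement converts a PBR tablespace to relative page numbering; with RPN in place, DSSIZE can then be grown partition by partition, as in the second statement.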

Are you alone?

No, of course not! Our product RealTimeDatabaseExpert (RTDX) can do all this for you. We also guarantee that no object will be left “hanging”, due to using our BatchControl technology. We also use look-ahead features with time windows to check that we are *not* going to encroach into productive timeframes. If you also have our WorkLoadExpert (WLX) software installed, we can even pick the right moment to fire off the REORGs that you need when you can actually run them! This permanent trickle of reorgs means that within a few years you are done and ready to take advantage of that ten-lane Autobahn!

 

This newsletter is interrupted by our marketing department, who are calling out:

“By the way, this tailored solution for this special requirement is also available for rent!”



See our RTDX Flyer and RealTime DBAExpert page


Now back to our usual programming…

How long do you think it will take you to get from 6 to 10 bytes and from simple/segmented to UTSs? Are you thinking days, months, years?? I would love to know! To help you in this phase, here are some little SQLs that will give you an idea of how far you have to go… based on the assumption that anything REORGed in the last six months will probably get REORGed again.

SELECT  COUNT(*)                          AS INDEXPARTS_TOTAL
FROM SYSIBM.SYSINDEXSPACESTATS;
SELECT  COUNT(*)                          AS INDEXPARTS
      , SUM(COALESCE(NACTIVE, 0))         AS NACTIVE
      , SUM(COALESCE(REORGINSERTS, 0))    AS REORGINSERTS
      , SUM(COALESCE(REORGDELETES, 0))    AS REORGDELETES
      , SUM(COALESCE(REORGMASSDELETE, 0)) AS REORGMASSDELETE
FROM SYSIBM.SYSINDEXSPACESTATS
WHERE (REORGLASTTIME IS NULL
  OR (REORGLASTTIME IS NOT NULL
  AND REORGLASTTIME < CURRENT TIMESTAMP - 6 MONTHS))
  OR
       (REORGLASTTIME IS NOT NULL
   AND (LOADRLASTTIME IS NULL
    OR (LOADRLASTTIME IS NOT NULL
    AND LOADRLASTTIME < CURRENT TIMESTAMP - 6 MONTHS)))
;                                                          
SELECT  COUNT(*)                          AS TABLEPARTS_TOTAL
FROM SYSIBM.SYSTABLESPACESTATS
; 
SELECT  COUNT(*)                          AS TABLEPARTS
      , SUM(COALESCE(NACTIVE, 0))         AS NACTIVE
      , SUM(COALESCE(REORGINSERTS, 0))    AS REORGINSERTS
      , SUM(COALESCE(REORGUPDATES, 0))    AS REORGUPDATES
      , SUM(COALESCE(REORGDELETES, 0))    AS REORGDELETES
      , SUM(COALESCE(REORGMASSDELETE, 0)) AS REORGMASSDELETE
FROM SYSIBM.SYSTABLESPACESTATS
WHERE (REORGLASTTIME IS NULL
  OR (REORGLASTTIME IS NOT NULL
  AND REORGLASTTIME < CURRENT TIMESTAMP - 6 MONTHS))
  OR (REORGLASTTIME IS NOT NULL
   AND (LOADRLASTTIME IS NULL
    OR (LOADRLASTTIME IS NOT NULL
    AND LOADRLASTTIME < CURRENT TIMESTAMP - 6 MONTHS)))    
;                                                           

Naturally if you are *in* Db2 11 you can easily extend these queries to give you proper feedback like:

SELECT  COUNT(*)                          AS INDEXPARTS 
      , SUM(CASE WHEN B.RBA_FORMAT = 'E' 
                       THEN 1 ELSE 0 END) AS INDEXPARTS_EXTENDED
      , SUM(CASE WHEN B.RBA_FORMAT = 'U'  
                       THEN 1 ELSE 0 END) AS INDEXPARTS_DEFINE_NO 
      , SUM(CASE WHEN B.RBA_FORMAT = 'B' 
                       THEN 1 ELSE 0 END) AS INDEXPARTS_BASIC
      , SUM(CASE WHEN B.RBA_FORMAT = ' ' 
                       THEN 1 ELSE 0 END) AS INDEXPARTS_MIGRATED
FROM SYSIBM.SYSINDEXSPACESTATS A
    ,SYSIBM.SYSINDEXPART       B
WHERE A.CREATOR   = B.IXCREATOR
  AND A.NAME      = B.IXNAME
  AND A.PARTITION = B.PARTITION 
;                                                                    
SELECT  B.RBA_FORMAT                               AS INDEXPART_FORMAT
      , COUNT(*)                                   AS INDEXPARTS 
      , SUM(1E00 * COALESCE(A.NACTIVE, 0))         AS NACTIVE
      , SUM(1E00 * COALESCE(A.REORGINSERTS, 0))    AS REORGINSERTS
      , SUM(1E00 * COALESCE(A.REORGDELETES, 0))    AS REORGDELETES
      , SUM(1E00 * COALESCE(A.REORGMASSDELETE, 0)) AS REORGMASSDELETE
FROM SYSIBM.SYSINDEXSPACESTATS A                                    
    ,SYSIBM.SYSINDEXPART       B
WHERE A.CREATOR   = B.IXCREATOR 
  AND A.NAME      = B.IXNAME 
  AND A.PARTITION = B.PARTITION
  AND B.RBA_FORMAT IN ('B' , ' ')
  AND ((A.REORGLASTTIME IS NULL 
  OR (A.REORGLASTTIME IS NOT NULL
  AND A.REORGLASTTIME < CURRENT TIMESTAMP - 6 MONTHS))
  OR                                                                 
    (A.REORGLASTTIME IS NOT NULL
    AND (A.LOADRLASTTIME IS NULL
    OR (A.LOADRLASTTIME IS NOT NULL 
    AND A.LOADRLASTTIME < CURRENT TIMESTAMP - 6 MONTHS))))
  GROUP BY B.RBA_FORMAT
;                                                                    

SELECT  COUNT(*)                          AS TABLEPARTS 
      , SUM(CASE WHEN B.RBA_FORMAT = 'E' 
                       THEN 1 ELSE 0 END) AS TABLEPARTS_EXTENDED 
      , SUM(CASE WHEN B.RBA_FORMAT = 'U' 
                       THEN 1 ELSE 0 END) AS TABLEPARTS_DEFINE_NO 
      , SUM(CASE WHEN B.RBA_FORMAT = 'B'
                       THEN 1 ELSE 0 END) AS TABLEPARTS_BASIC 
      , SUM(CASE WHEN B.RBA_FORMAT = ' ' 
                       THEN 1 ELSE 0 END) AS TABLEPARTS_MIGRATED
      FROM SYSIBM.SYSTABLESPACESTATS A   
          ,SYSIBM.SYSTABLEPART       B 
WHERE A.DBNAME    = B.DBNAME 
  AND A.NAME      = B.TSNAME 
  AND A.PARTITION = B.PARTITION 
;                                                             
SELECT  B.RBA_FORMAT                        AS TABLEPART_FORMAT
      , COUNT(*)                            AS TABLEPARTS
      , SUM(COALESCE(A.NACTIVE, 0))         AS NACTIVE
      , SUM(COALESCE(A.REORGINSERTS, 0))    AS REORGINSERTS
      , SUM(COALESCE(A.REORGUPDATES, 0))    AS REORGUPDATES
      , SUM(COALESCE(A.REORGDELETES, 0))    AS REORGDELETES
      , SUM(COALESCE(A.REORGMASSDELETE, 0)) AS REORGMASSDELETE
FROM SYSIBM.SYSTABLESPACESTATS A
    ,SYSIBM.SYSTABLEPART       B
WHERE A.DBNAME    = B.DBNAME
  AND A.NAME      = B.TSNAME
  AND A.PARTITION = B.PARTITION
  AND B.RBA_FORMAT IN ('B' , ' ')
  AND ((A.REORGLASTTIME IS NULL
  OR (A.REORGLASTTIME IS NOT NULL
  AND A.REORGLASTTIME < CURRENT TIMESTAMP - 6 MONTHS))     
  OR                                                          
     (A.REORGLASTTIME IS NOT NULL                             
   AND (A.LOADRLASTTIME IS NULL                               
    OR (A.LOADRLASTTIME IS NOT NULL                           
    AND A.LOADRLASTTIME < CURRENT TIMESTAMP - 6 MONTHS))))    
GROUP BY B.RBA_FORMAT                                         
;

Here the RBA_FORMAT value “E” is extended (10 bytes) and “U” is DEFINE NO; if you have the ZPARM OBJECT_CREATE_FORMAT set to EXTENDED (which you should!), a DEFINE NO object will be created as extended when the first insert happens. The problem children are “B” and blank.

One late bit of info that came from a reader, which some of you might find interesting, or perhaps worrying, is this:

There is a potentially time-consuming prerequisite to the migration, and that is the changes made in InfoSphere Information Replication 10.2.1, which expand the two columns COMMITSEQ and INTENTSEQ from 10 to 16 bytes to accommodate the 10-byte RBA/LRSN. This modification has a big impact on application programs that make use of these two columns, especially COBOL programs that must be changed, recompiled, and regression tested, as well as all SORT statements on the data file extracts that include these columns, etc.

 

As usual any comments or criticisms are greatly appreciated

TTFN, Roy Boxwell

 


2016-07 – Migration Planning: Stuck in a BIND

Migration planning to DB2 11 z/OS:

Two queries to list the NULLID and the PLAN packages that need a DB2 REBIND

I was at a customer site the other week and we were getting weird data out of the IFCIDs used to read the Dynamic Statement Cache (DSC) – in fact, it was ending with an RC 00E60833, which was very odd, as that has nothing to do with the DSC!

Old Structure problem?

Anyway, I had a hunch that perhaps old structures were causing grief as, at this site, the problems were happening only for about four hours overnight; never during peak load and absolutely never, ever in Test or QA.

HealthCheck time!

So I ran our little PerformanceHealthCheck freeware and, among a ton of other info, got the following data:

PHC009W Packages last bound in DB2 V0710         :     358

Now this is OK, as long as they are *not* being executed! So, looking into our WorkloadExpert tool, we could see that there were lots of SQLs, both static and dynamic, running in these packages last bound in DB2 V7.

NULLID Problem?

What was especially worrying was the high number of NULLID packages in the list. These NULLID collections are normally used by JAVA and other remote access routines. They are normally always allocated and in use, making a REBIND a bit of a challenge. I wrote a little SQL to list out the possible bad guys:

SELECT SUBSTR(A.COLLID, 1, 18) AS COLLID 
, SUBSTR(A.NAME, 1, 8) AS PACKAGE
, A.RELBOUND 
, A.LASTUSED
FROM SYSIBM.SYSPACKAGE A 
WHERE A.RELBOUND IN (' ', 'K', 'L', 'M', 'O')
AND   A.LASTUSED > CURRENT DATE - 14 DAYS 
ORDER BY A.LASTUSED DESC, A.NAME 
WITH UR
;

The contents of the column RELBOUND are:

Blank is before DB2 V7, K is DB2 V7, L is DB2 V8, M is DB2 9, O is DB2 10, and P is DB2 11.

Of course your query must be changed depending on what level of DB2 you are currently running. If running on DB2 10 NFM, then remove the ‘O’ from the IN list. It will then return all executed (so, in use!) packages that have run within the last 14 days. Feel free to change the ORDER BY too, if you like.

Our output was quite scary as lots of packages were there.

REBIND the world

The DBAs rebound all of the static packages that had been executed this year (not quite the world, but still way more than I thought could still be running anywhere in the world!) and then, on a Sunday morning, when they could “break in”, they rebound all of the NULLID packages on the list.

Since then – no problem! But is this the end of my story? Naturally not!

Now this particular customer is on DB2 10 NFM, and so these very old structures made me wonder… I did a post on LISTSERV about this. Pat Bossman, from IBM, answered (corrections from Pat came in a later mail, and I have cut out some extra clutter):

Hello,

For migration to DB2 10, packages last bound [before] DB2 6 or higher are autobound.

http://www.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/com.ibm.db2z10.doc.inst/src/tpc/db2z_relincompatapplsqlfromv9.dita?lang=en

For migration to DB2 11, it’s [before] Version 9 and higher.

http://www.ibm.com/support/knowledgecenter/SSEPEK_11.0.0/com.ibm.db2z11.doc.inst/src/tpc/db2z_relincompatapplsqlfromv10.dita?lang=en

With a RELBOUND of V7, you should hit it on migration to DB2 11. So yes, you should REBIND those packages on DB2 10 to avoid post-migration autobind.

I’m looking into the nullid package issue. The assumption that if a structure does not have embedded SQL it does not require a REBIND is false. We still require PLAN REBINDs, or they are also autobound on migration – even if the PLAN does not have DBRMs anymore. There are still structures in there that need updating.

(Don’t forget about PLAN REBINDs!)

Best regards,

Pat

PLAN Ahead

So now you know! Even PLANs should get a REBIND *before* you migrate. Here you can use our EarlyPrecheck or BIX software to aid in the pre-migration checklist, in order to stop any AUTOBIND funnies that could occur. In fact, I would also recommend REBINDing the PLANs after successful completion of the DB2 release migration – it will save you a step in a year or two.

 

Finally I wrote another little SQL to give the “overview” of all “old” PLANs:

-- LIST OUT ALL PLANS BOUND IN "OLDER VERSIONS"
-- OF DB2
SELECT C.NAME AS PLAN
FROM SYSIBM.SYSPLAN     C
WHERE C.RELBOUND IN (' ', 'K', 'L', 'M', 'O')
ORDER BY 1
WITH UR
;

(Again, remove the ‘O’ if you are in DB2 10 NFM.) This then gives you, hopefully, a small list of PLANs that need a REBIND, and you are ready to go!
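
If the list is not so small, you can let SQL generate the REBIND commands for you – a little sketch along the same lines (adjust the IN list for your DB2 level, as above):

SELECT 'REBIND PLAN(' CONCAT STRIP(C.NAME) CONCAT ')'
FROM SYSIBM.SYSPLAN C
WHERE C.RELBOUND IN (' ', 'K', 'L', 'M', 'O')
ORDER BY 1
WITH UR
;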

 

I hope you liked this month’s topic.

As always, any questions or comments would be most welcome!

TTFN,

Roy Boxwell

 

2016-04 DB2 SYSCOPY – Do you know what is in it? Redux

DB2 10, DB2 11 Migration & DB2 Database maintenance

A special query to check and clear out the DB2 Catalog

 

This is a redux of my original newsletter from February 2013, with additional data at the very end. The reason for this relaunch? Simple – I have had numerous emails about this topic, especially with regard to migrating from DB2 10 to DB2 11, and it is obviously still very important to check and to clear out *before* you start your DB2 migration. So now enjoy the redux version…

 

DB2 Database Maintenance and the DB2 Catalog

If you have written your own DB2 database maintenance programs, then you almost certainly run SQL queries against the DB2 Catalog. If you are also checking for Incremental Image Copies (IIC) or Full Image Copies (FIC), then you are probably using a mix of the Real-Time Statistics tables (RTS) and SYSIBM.SYSCOPY to figure out which type of utility to generate. Further, if you are on DB2 10 (any mode! CM8, CM9, or NF), then this newsletter is for you!

 

I had a problem in one of our test centers with a cursor that I noticed was taking a long time to finish, so I went into our Analyze+ tool, extracted the EDM Pool data (this is the new data in DB2 10 NF that is analogous to the Dynamic Statement Cache counters and statistics), and sorted by Total Elapsed Time descending to get this:

 

Analyze+ for DB2 z/OS ----- EDM Pool (6/12) -------- Stmt 1 from 316 
Command ===>                                         Scroll ===> CSR  
                                                           DB2: QA1B 
Primary cmd: END, SE(tup), Z(oom), L(ocate) 
             total elapse time                 
Line    cmd: Z(oom), A(nalyze), D(ynamic Analyze), E(dit Statement), 
             P(ackage), S(tatement Text)
                                                               
                     Total       Average         Total       Average
     StmtID   Elapsed Time  Elapsed Time      CPU Time      CPU Time
----------- HHHH:MM:SS.ttt HHH:MM:SS.ttt HHH:MM:SS.ttt HHH:MM:SS.ttt
     115967    1:28.107705     29.369235   1:12.151391     24.050464
     114910       8.367834      0.000331      6.779229      0.000268
      79642       7.998559      0.054412      6.346829      0.043176
     114907       5.760045      0.000238      4.378691      0.000181
     115974       5.031890      2.515945      2.937258      1.468629
       5439       4.037261      0.000739      2.685938      0.000492

 

Over one hour total, and over 29 minutes average, for our small amount of test data set alarm bells ringing – so I drilled down to the SQL:

 

Analyze+ for DB2 z/OS -- View EDM-Pool Statement LINE 00000001 COL 001 080
Command ===>                                             Scroll ===> CSR
                                                         DB2: QA1B
Primary cmd: END
Collection:RTDX0510_PTFTOOL
Package   :M2DBSC09 
Contoken  :194C89620AE53D88  PrecompileTS: 2012-10-29-15.34.40.938230
StmtID    :          115967  StmtNo      :      1223  SectNo:       2
---------------------------------------------------------------------
DECLARE
  SYSCOPY-IC-MODI-9N
CURSOR WITH HOLD FOR 
SELECT                                                                       
  T1.N1 , T1.N2 , T1.N3 , T1.N4 , T1.N5 , T1.N6 , T1.N7 , T1.N8 , T1.N9 
  , T1.N10 , T1.N11 , T1.N12 
FROM (
  SELECT       
    ICTS.DBNAME AS N1
  , ICTS.TSNAME AS N2
  , ICTS.TIMESTAMP AS N3
  , ' ' AS N4
  , ICTS.DSNUM AS N5
 , ICTS.ICTYPE AS N6
 , DAYS ( :WORK-CURRENT-DATE ) - DAYS ( ICTS.TIMESTAMP ) AS N7
 , ICTS.OTYPE AS N8
 , ICTS.DSNAME AS N9
 , ICTS.ICUNIT AS N10
 , ICTS.INSTANCE AS N11
 , ICTS.STYPE AS N12                                                                    
  FROM SYSIBM.SYSCOPY ICTS                                                     
  WHERE ICTS.ICBACKUP IN ( '  ' , 'LB' , 'FC' )                            
  AND  ICTS.OTYPE = 'T' 
  UNION                    
   SELECT                           
    ICIX.DBNAME AS N1
  , CAST ( TABLES.TSNAME AS CHAR ( 8 ) CCSID EBCDIC ) AS N2
  , ICIX.TIMESTAMP AS N3
  , ICIX.TSNAME AS N4
  , ICIX.DSNUM AS N5
  , ICIX.ICTYPE AS N6
  , DAYS ( :WORK-CURRENT-DATE ) - DAYS ( ICIX.TIMESTAMP ) AS N7
  , ICIX.OTYPE AS N8
  , ICIX.DSNAME AS N9
  , ICIX.ICUNIT AS N10
  , ICIX.INSTANCE AS N11
  , ICIX.STYPE AS N12
   FROM SYSIBM.SYSCOPY ICIX
      , SYSIBM.SYSINDEXES INDEXES
      , SYSIBM.SYSTABLES TABLES           
   WHERE ICIX.ICBACKUP IN ( '  ' , 'LB' , 'FC' )                          
   AND  ICIX.OTYPE = 'I' 
   AND VARCHAR ( ICIX.DBNAME , 24 ) = INDEXES.DBNAME       
   AND VARCHAR ( ICIX.TSNAME , 24 ) = INDEXES.INDEXSPACE 
   AND INDEXES.TBNAME = TABLES.NAME
   AND INDEXES.TBCREATOR = TABLES.CREATOR           
   AND  TABLES.TYPE IN ( 'H' , 'M' , 'P' , 'T' , 'X' ) )    
 AS T1                                                   
 ORDER BY CAST (T1.N1 AS CHAR ( 8 ) CCSID EBCDIC )
        , CAST (T1.N2 AS CHAR ( 8 ) CCSID EBCDIC )
        , N3 DESC 
 FOR FETCH ONLY       
 WITH UR                                                             



 HOSTVARIABLE NAME             NULLABLE  TYPE           LENGTH  SCALE
 ---------------------------  --------  --------------  -----  -----
 WORK-CURRENT-DATE             NO        CHAR              26       
 WORK-CURRENT-DATE             NO        CHAR              26       
******************************** Bottom of Data **********************

 

OK, OK, this SQL is not going to win a beauty contest any day soon, but it used to run just fine… so I EXPLAINed it:

Analyze+ for DB2 z/OS -- Explain Data (1/6) --------- Entry 1 from 7  
Command ===>                                         Scroll ===> CSR   
EXPLAIN: DYNAMIC     MODE: CATALOG                         DB2: QA1B  
Primary cmd: END, T(Explain Text), V(iolations), R(unstats), 
             P(redicates), S(tatement Text), C(atalog Data),
             M(ode Catalog/History),Z(oom), PR(int Reports), 
             SAVExxx, SHOWxxx
             
Line    cmd: Z(oom), C(osts), I(ndexes of table), S(hort catalog),
             T(able), V(irtual indexes of table), 
             X(IndeX)                               
Collection : RTDX0510_PTFTOOL   Package : M2DBSC09   Stmt :     1223
Version    : - NONE -         
Milliseconds:  77519  Service Units: 220222  Cost Category: B
                                                                                
  QBNO QBTYPE CREATOR  TABLE NAME       MTCH IX METH PRNT TABL PRE  MXO 
  PLNO TABNO  XCREATOR INDEX NAME ACTYP COLS ON OD   QBLK TYPE FTCH PSQ 
  ---- ------ -------- ---------- ----- ---- -- ---- ---- ---- ---- --- 
     1 SELECT R510PTFT T1         R        0 N     0    0 W    S      0 
     1 5 
     1 SELECT                              0 N     3    0 -           0 
     2 0  
     2 UNION                               0       3    1 -           0 
     1 0 
     3 NCOSUB SYSIBM   SYSCOPY    R        0 N     0    2 T    S      0 
     1 1         
     4 NCOSUB SYSIBM   SYSCOPY    R        0 N     0    2 T    S      0 
     1 2        
     4 NCOSUB SYSIBM   SYSINDEXES I        2 N     1    2 T           0 
     2 3      SYSIBM   DSNDXX02       
     4 NCOSUB SYSIBM   SYSTABLES  I        2 N     1    2 T           0 
     3 4      SYSIBM   DSNDTX01      
  ---- ------ -------- ---------- ----- ---- -- ---- ---- ---- ---- ---

 

This is *after* I had REORGed SYSCOPY, SYSTSIXS, and SYSTSTAB, and then run RUNSTATS on SYSTSIXS and SYSTSTAB, as you cannot do inline RUNSTATS on those two of course!

 

Two tablespace scans against SYSCOPY are not brilliant, of course, but in this system we only have 4,000 tablespaces and 2,500 indexes… so then I used the Catalog primary command to have another look at the catalog data:

TS   : DSNDB06 .SYSCOPY 
Stats: 2013-02-04-10.49.32.600316 
  Partitions:  0 , Tables: 1 , NACTIVEF: 18.272 pages 
  Type      :  Neither a LOB nor a MEMBER CLUSTER.
  RTS data TOTALROWS : 347.087 , Pages: 18.268
              
Table: SYSIBM.SYSCOPY 
Stats: 2013-02-04-10.49.32.600316
  No. of rows (CARDF): 347.082 , Pages: 18.268 
  Index: SYSIBM.DSNUCH01 
  Stats: 2013-02-04-10.49.32.600316     Type: Type-2 index
    Levels: 3 , Leaf pages: 3.945 
    FIRSTKEYCARDF: 101 , FULLKEYCARDF: 347.082 
    RTS data Levels: 3 , Leaf pages: 3.945 , TOTALENTRIES: 347.087 
    CLUSTERING: Y , CLUSTERED: Y , CLUSTERRATIO = 100,00% 
    DATAREPEATFACTORF: 18.268 
    Indexcolumn  ! Format        ! Dist. Values ! A/D ! NL ! Stats  
    -------------+---------------+--------------+-----+----+-------
    DBNAME       ! CHAR(8)       !          101 ! ASC ! N  ! OK  
    TSNAME       ! CHAR(8)       !          712 ! ASC ! N  ! OK  
    START_RBA    ! CHAR(6)       !       72.398 ! DSC ! N  ! OK  
    TIMESTAMP    ! TIMESTAMP(6)  !      347.082 ! DSC ! N  ! OK  
                                                                               
  Index: SYSIBM.DSNUCX01                                                 
  Stats: 2013-02-04-10.49.32.600316   Type: Type-2 index 
    Levels: 3 , Leaf pages: 509
    FIRSTKEYCARDF: 1.820 , FULLKEYCARDF: 1.820
    RTS data Levels: 3 , Leaf pages: 509 , TOTALENTRIES: 347.087 
    CLUSTERING: N , CLUSTERED: Y , CLUSTERRATIO = 100,00%
    DATAREPEATFACTORF: 18.275 
    Indexcolumn  ! Format        ! Dist. Values ! A/D ! NL ! Stats 
    -------------+---------- ----+--------------+-----+----+-------
    DSNAME       ! CHAR(44)      !        1.820 ! ASC ! N  ! OK    
                                                

Here I had a heart attack! 347,087 rows?!?!?!?!?!? How in the wide wide world of sports did that happen? Time to drill down into the contents of SYSCOPY with this little query:

SELECT ICTYPE , STYPE,  COUNT(*) 
FROM SYSIBM.SYSCOPY  
GROUP BY ICTYPE , STYPE            
;

Which returned these rather surprising results:

---------+---------+---------+
ICTYPE  STYPE                          
---------+---------+---------+
A       A                4             
B                       46             
C       L             1669             
C       O                4             
F                      100             
F       W               16             
I                        4             
L       M           344723             
M       R               18             
R                      151             
S                       62             
W                       18             
W       S                1             
Y                        2             
Z                      269             
DSNE610I NUMBER OF ROWS DISPLAYED IS 15

The L and M combination appears 344,723 times!!!

Grab your handy DB2 10 SQL reference and page on down to DB2 Catalog tables, SYSIBM.SYSCOPY and you will see:

ICTYPE CHAR(1) NOT NULL

Type of Operation:

A ALTER
B REBUILD INDEX
C CREATE
D CHECK DATA LOG(NO) (no log records for the range are available for RECOVER utility)
E RECOVER (to current point)
F COPY FULL YES
I COPY FULL NO
L SQL (type of operation)
M MODIFY RECOVERY utility
P RECOVER TOCOPY or RECOVER TORBA (partial recovery point)
Q QUIESCE
R LOAD REPLACE LOG(YES)
S LOAD REPLACE LOG(NO)
T TERM UTILITY command
V REPAIR VERSIONS utility
W REORG LOG(NO)
X REORG LOG(YES)
Y LOAD LOG(NO)
Z LOAD LOG(YES)

Now, in my version, the L entry has a ‘|’ by it to signify that it is new. Scroll on down further to STYPE to read:

STYPE CHAR(1) NOT NULL

Sub-type of operation:

When ICTYPE=L, the value is:

M Mass DELETE, TRUNCATE TABLE, DROP TABLE, or ALTER TABLE ROTATE PARTITION.
The LOWDSNUM column contains the table OBID of the affected table.

So, in other words, every time a program does a mass DELETE, Db2 inserts a row into SYSCOPY. So then I ran another query to see when this all began and, hopefully, ended:

SELECT MAX(ICDATE), MIN(ICDATE) 
FROM SYSIBM.SYSCOPY                   
WHERE ICTYPE = 'L'                    
;                                     
---------+---------+---------+--------
                                      
---------+---------+---------+--------
121107  120828                        
DSNE610I NUMBER OF ROWS DISPLAYED IS 1

 

So we started getting records on the 28th August 2012, and the last one was on the 7th November 2012. So, in just about ten weeks, even we managed 344,723 mass deletes!

So now, with my Sherlock Holmes deerstalker hat on, the question was “Why did it stop in November?” Happily, we have a history here of APARs, and that is when this PMR bubbled to the surface:

PM52724: MASS DELETES ENDS UP WITH LOCK ESCALATION ON SYSCOPY IN V10. BECAUSE PM30991 INTALLED CODE INSERTING L 12/01/04 PTF PECHANGE

 

I will let you go and read the text, but suffice it to say IBM realized what a disaster this “logging” of mass deletes was and HIPERed a quick fix to stop it! Plus you can see the APAR that “brought in the dead mouse”: PM30991.

PM30991 UK66327 Closed 2011-03-30

PM52724 UK80113 Closed 2012-07-03

So if you installed PM30991 and not PM52724, you probably have some cleaning up to do…
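
On a production system, the supported way to do that cleaning up is the MODIFY RECOVERY utility, not a raw DELETE like mine below. A minimal sketch (database and tablespace names are placeholders; pick a retention that matches your image copy strategy, as deleting SYSCOPY rows you still need costs you recoverability):

MODIFY RECOVERY TABLESPACE dbname.tsname
       DELETE AGE(365)

This removes the old SYSCOPY rows, and the matching SYSLGRNX records, in a controlled way.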

By the way, I also rewrote the Ugly Duckling SQL:

  SELECT  T1.N1      
         ,T1.N2         
         ,T1.N3              
         ,T1.N4         
         ,T1.N5        
         ,T1.N6       
         ,T1.N7 
         ,T1.N8      
         ,T1.N9     
         ,T1.N10         
         ,T1.N11        
         ,T1.N12       
    FROM (                                   
   SELECT ICTS.DBNAME    AS N1   
         ,ICTS.TSNAME    AS N2      
         ,ICTS.TIMESTAMP AS N3      
         ,' '            AS N4     
         ,ICTS.DSNUM     AS N5     
         ,ICTS.ICTYPE    AS N6     
         ,DAYS ( :WORK-CURRENT-DATE ) - DAYS ( ICTS.TIMESTAMP ) AS N7
         ,ICTS.OTYPE     AS N8             
         ,ICTS.DSNAME    AS N9               
         ,ICTS.ICUNIT    AS N10              
         ,ICTS.INSTANCE  AS N11              
         ,ICTS.STYPE     AS N12              
     FROM SYSIBM.SYSCOPY ICTS                
    WHERE ICTS.ICBACKUP IN ('  ','LB','FC')  
      AND ICTS.OTYPE    = 'T'
UNION ALL    
   SELECT ICIX.DBNAME     AS N1    
         ,CAST(TABLES.TSNAME                 
          AS CHAR(8) CCSID EBCDIC) AS N2  
         ,ICIX.TIMESTAMP  AS N3      
         ,ICIX.TSNAME     AS N4
         ,ICIX.DSNUM      AS N5              
         ,ICIX.ICTYPE     AS N6              
         ,DAYS ( :WORK-CURRENT-DATE ) - DAYS ( ICIX.TIMESTAMP ) AS N7
         ,ICIX.OTYPE      AS N8              
         ,ICIX.DSNAME    AS N9               
         ,ICIX.ICUNIT    AS N10              
         ,ICIX.INSTANCE  AS N11              
         ,ICIX.STYPE     AS N12
    FROM SYSIBM.SYSCOPY ICIX                  
        ,SYSIBM.SYSINDEXES INDEXES            
        ,SYSIBM.SYSTABLES TABLES              
   WHERE ICIX.ICBACKUP IN ('  ','LB','FC')    
     AND ICIX.OTYPE        = 'I'              
     AND ICIX.DBNAME      = INDEXES.DBNAME    
     AND ICIX.TSNAME      = INDEXES.INDEXSPACE
     AND INDEXES.TBNAME    = TABLES.NAME      
     AND INDEXES.TBCREATOR = TABLES.CREATOR   
 ) AS T1                                      
ORDER BY CAST(T1.N1 AS CHAR(8) CCSID EBCDIC)  
        ,CAST(T1.N2 AS CHAR(8) CCSID EBCDIC)  
        ,        N3 DESC                      
  FOR FETCH ONLY                              
  WITH UR                                     
  ;

 

It now performs like this:

Milliseconds:  55911  Service Units:   158836  Cost Category: A 
                                                                                 
QBNO QBTYPE CREATOR  TABLE NAME       MTCH IX METH PRNT TABL PRE  MXO 
PLNO TABNO  XCREATOR INDEX NAME ACTYP COLS ON OD   QBLK TYPE FTCH PSQ 
---- ------ -------- ---------- ----- ---- -- ---- ---- ---- ---- ---
   1 NCOSUB SYSIBM   SYSINDEXES I        0 N    0     2 T    S      0 
   1 3      SYSIBM   DSNDXX07   
   1 NCOSUB SYSIBM   SYSTABLES  I        2 N    1     2 T           0 
   2 4      SYSIBM   DSNDTX01
   1 NCOSUB SYSIBM   SYSCOPY    I        2 N    1     2 T    S      0 
   3 2      SYSIBM   DSNUCH01    
   2 UNIONA                              0 N    3     0 -           0 
   1 0   
   5 NCOSUB SYSIBM   SYSCOPY    R        0 N    0     2 T    S      0 
   1 1         
   ------ -------- ----------  ------ ---- -- ---- ---- ---- ----- ---

 

I am sure that once I have deleted all the SYSCOPY rows, it will return to being a nice little SQL! (Note that we do not need to RECOVER on our test machine, so I have the luxury of being able to delete the data – you, of course, cannot!)

 

After a large DELETE run, which left only 2,365 rows, followed by a REORG with inline RUNSTATS, the original SQL now looks like this:

 

Milliseconds:       672  Service Units:       1909  Cost Category: B  
                                                                               
QBNO QBTYPE CREATOR  TABLE NAME       MTCH IX METH PRNT TABL PRE  MXO
PLNO TABNO  XCREATOR INDEX NAME ACTYP COLS ON OD   QBLK TYPE FTCH PSQ
---- ------ -------- ---------- ----- ----- ---- -- ---- ---- ---- --
   1 SELECT R510PTFT T1         R        0 N     0    0 W    S      0
   1 5   
   1 SELECT                              0 N     3    0 -           0
   2 0   
   2 UNION                               0       3    1 -           0
   1 0 
   3 NCOSUB SYSIBM   SYSCOPY    R        0 N     0    2 T    S      0
   1 1  
   4 NCOSUB SYSIBM   SYSCOPY    R        0 N     0    2 T    S      0
   1 2  
   4 NCOSUB SYSIBM   SYSINDEXES I        2 N     1    2 T           0
   2 3      SYSIBM   DSNDXX02                 
   4 NCOSUB SYSIBM   SYSTABLES  I        2 N     1    2 T           0
   3 4      SYSIBM   DSNDTX01      
  ---- ------ -------- ------------------ ----- ---- -- ---- ---- ----

And my version:

Milliseconds:      631  Service Units:     1792  Cost Category: A          
                                                                                
QBNO QBTYPE CREATOR  TABLE NAME       MTCH IX METH PRNT TABL PRE  MXO
PLNO TABNO  XCREATOR INDEX NAME ACTYP COLS ON OD   QBLK TYPE FTCH PSQ
---- ------ -------- ---------- ----- ---- ---- -- ---- ---- ---- ---- 
   1 NCOSUB SYSIBM   SYSCOPY    R        0 N     0    2 T    S      0
   1 2       
   1 NCOSUB SYSIBM   SYSINDEXES I        2 N     1    2 T           0
   2 3      SYSIBM   DSNDXX02  
   1 NCOSUB SYSIBM   SYSTABLES  I        2 N     1    2 T           0
   3 4      SYSIBM   DSNDTX01 
   2 UNIONA                              0 N     3    0 -           0
   1 0             
   5 NCOSUB SYSIBM   SYSCOPY    R        0 N     0    2 T    S      0
   1 1   
   ---- ------ -------- ------------------ ----- ---- -- ---- ---- ----

Doesn’t look quite so impressive now…sniff…sniff

 

Here’s my SYSCOPY query for all cases:

SELECT ICTYPE, STYPE, MIN(ICDATE) AS OLDEST, MAX(ICDATE) AS NEWEST
     , COUNT(*) AS COUNT                                          
FROM SYSIBM.SYSCOPY                                               
GROUP BY ICTYPE , STYPE                                           
;                                                                 
---------+---------+---------+---------+---------+
ICTYPE  STYPE  OLDEST  NEWEST        COUNT           
---------+---------+---------+---------+---------+
A       A      121228  121228            4        
B              121228  130128           46       
C       L      100809  130204         1669    
C       O      120827  120827            4    
F              100809  130204          100     
F       W      100809  130204           16   
I              130131  130204            4
M       R      130102  130131           18         
R              120829  130130          151        
S              120829  130131           62      
W              100809  130204           18       
W       S      100809  100809            1     
Y              120828  120828            2    
Z              120828  130201          269    
DSNE610I NUMBER OF ROWS DISPLAYED IS 14

 

Clean Up

OK, so what can you do if you have thousands of these records? Well, I would start with MODIFY RECOVERY utilities to delete the bad guys. Your “normal” DB2 database maintenance jobs should take care of this for you but, if you do not run these on a regular basis, then start with this query:

SELECT A.DBNAME, A.TSNAME, A.DSNUM                              
      , MAX(DATE(A.TIMESTAMP)), MIN(DATE(A.TIMESTAMP)), COUNT(*)
FROM SYSIBM.SYSCOPY A                                           
WHERE A.ICTYPE = 'L'                                            
  AND A.STYPE  = 'M'                                            
  AND EXISTS (SELECT 1                                          
              FROM SYSIBM.SYSCOPY B                             
              WHERE A.DBNAME = B.DBNAME                         
                AND A.TSNAME = B.TSNAME                         
                AND (A.DSNUM  = B.DSNUM                         
                 OR (A.DSNUM > 0                                
                 AND B.DSNUM = 0 ))                             
                AND A.START_RBA < B.START_RBA                   
                AND B.ICTYPE = 'F')                             
GROUP BY A.DBNAME, A.TSNAME, A.DSNUM                            
FOR FETCH ONLY                                                  
WITH UR                                                         
;

The output shows you the DBNAME, TSNAME, DSNUM and counts for all of the “bad guys” that have at least one Full Image Copy *after* the bad guy was inserted:

---------+---------+---------+---------+---------+---------+---------+
DBNAME    TSNAME          DSNUM                                       
---------+---------+---------+---------+---------+---------+---------+
R510D0PT  R510S04             0  2016-01-26  2015-11-11           75  
R510D0PT  R510S10             0  2016-01-26  2015-11-11           75  
R510D0PT  R510S12             0  2016-01-26  2015-11-11           75  
R510D0PT  R510S14             0  2016-01-26  2015-11-11           75

 

This gives you the needed input to write a simple MODIFY RECOVERY utility statement like this:

 

MODIFY RECOVERY TABLESPACE R510D0PT.R510S04 DELETE DATE 20151203

Why did I use 2015-12-03 when the query output shows 2015-11-11? Because I do not want to delete *all* of the Image Copy data in SYSCOPY – just the data from 2015-11-11 to 2015-12-02 (remember that MODIFY deletes up to the day *before* the date you enter). The output from the MODIFY looked like this:

 

DSNU000I    028 09:10:43.97 DSNUGUTC - OUTPUT START FOR UTILITY, 
            UTILID = PTFMO000MOU001
DSNU1044I   028 09:10:44.16 DSNUGTIS - PROCESSING SYSIN AS EBCDIC
DSNU050I    028 09:10:44.17 DSNUGUTC -  MODIFY RECOVERY TABLESPACE
            R510D0PT.R510S04 DELETE DATE 20151203
DSNU517I    -QA1B 028 09:10:44.28 DSNUMDEL 
            - SYSCOPY RECORD DELETED BY MODIFY UTILITY.
DSN=SETEST.R510D0PT.R510S04.D15142.T0635, VOL=SER=(CATLG), FILESEQNO=0
DSNU575I   -QA1B 028 09:10:44.49 DSNUMODA - MODIFY COMPLETED SUCCESSFULLY
DSNU010I    028 09:10:44.52 DSNUGBAC - UTILITY EXECUTION COMPLETE, 
            HIGHEST RETURN CODE=0

 

Now you must also IDCAMS DELETE any, and all, of the data set names listed in the utility output.

Now re-running the original query gives:

---------+---------+---------+---------+---------+---------+---------
DBNAME    TSNAME          DSNUM                                      
---------+---------+---------+---------+---------+---------+---------
R510D0PT  R510S04             0  2016-01-26  2015-12-03           53 
R510D0PT  R510S10             0  2016-01-26  2015-11-11           75 
R510D0PT  R510S12             0  2016-01-26  2015-11-11           75 
R510D0PT  R510S14             0  2016-01-26  2015-11-11           75

 

24 type L & M records gone from between 2015-11-11 and 2015-12-02 and of course the object is not in COPY Pending status!

However, there are cases where you cannot do this straightaway (perhaps the object has never been image copied?). In these cases, the simplest method is to take an Image Copy and then run a MODIFY but, if the object is too big, you must make the DB2 Catalog updateable (ask your friendly Sysprog about this) and simply use an SQL DELETE to remove all of the entries for an object *before* the last Full Image Copy.
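
For the SQL DELETE route, here is a minimal two-step sketch with placeholder object names (the catalog must have been made updateable first, and the usual warnings apply twice over here!):

-- Step one: find the START_RBA of the last Full Image Copy
SELECT MAX(START_RBA)
FROM SYSIBM.SYSCOPY
WHERE DBNAME = 'R510D0PT'
  AND TSNAME = 'R510S04'
  AND ICTYPE = 'F'
;
-- Step two: plug that value in (replacing the placeholder literal)
-- and delete the older Mass Delete rows
DELETE FROM SYSIBM.SYSCOPY
WHERE DBNAME = 'R510D0PT'
  AND TSNAME = 'R510S04'
  AND ICTYPE = 'L'
  AND STYPE  = 'M'
  AND START_RBA < BX'value-from-step-one'
;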

Once you have removed all the fluff, remember to REORG SYSCOPY and run RUNSTATS on the indexes. Then it should be a *lot* thinner, which makes migrating a tad easier!

 

As always if you have any comments or questions please email me!

TTFN

Roy Boxwell

 

2016-2 Real Time Statistics (RTS) Revisited – Information missing (part 1)

Easy Real Time Statistics (RTS) data initialization from DB2 9 to DB2 11:

A special query to run before a REORG, RUNSTATS, or a DB2 migration. How many RTS rows did you find?

 

Hands up who knows nothing about the RTS? Good, all hands are down! I had an interesting experience the other day with one of my customers: they are in the process of doing the big bang “REORG the world” to get from a six-byte RBA/LRSN to a ten-byte RBA/LRSN, due to problems with data cloning in a mixed DB2 release environment.

 

RTS Database Maintenance

They use the RTS to drive the creation of REORG, RUNSTATS, and COPY utilities, as this is the modern and correct way to go, right? Well, they ended up with a bunch of objects that refused to REORG. I looked high and low for *any* reason why they would be excluded from processing and found none. Well, actually, I lie – there was one: the candidate list was based upon a SELECT from the RTS tables, joined to the DB2 Catalog to refine the data, which then finally generated the required REORG jobs.
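
Just to make the mechanics concrete, the shape of such a candidate SELECT is something like this (a sketch with made-up thresholds – a real generator is considerably cleverer):

SELECT DBNAME, NAME, PARTITION
FROM SYSIBM.SYSTABLESPACESTATS
WHERE REORGUNCLUSTINS > 1000
   OR REORGNEARINDREF + REORGFARINDREF > 1000
;

And there is the trap: an object that has no row in the RTS can never appear in this result, no matter how badly it needs a REORG.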

 

RTS data missing

It was noticed that these tablespaces were either empty or very small – and that they did not even *exist* in the RTS! Now cast your mind way, way back to DB2 V7, when the RTS were introduced as an “optional” feature. I wrote a little SQL INSERT back then to populate the RTS for any missing elements, as the IBM way of populating the RTS was to “REORG the world” (remember those halcyon days?). Anyway, these days, about 11 years later, it is *always* assumed that:

– The RTS data exists
– The RTS data is correct (mainly!)

RTS data initialization made easy

So, to save you all from trying to find my SQL from those days, here’s the DB2 9 and above version which you can, perhaps must, run to make sure you have no “bodies in the cellar” like my customer did!

 

------------------------------------------------------------------------
-- THESE TWO QUERIES WILL FILL THE RTS TABLES                         --
-- SYSIBM.SYSTABLESPACESTATS AND SYSIBM.SYSINDEXSPACESTATS            --
-- WITH DEFAULT AND, WHEN POSSIBLE, WITH CATALOG DATA                 --
-- FOR MISSING ENTRIES                                                --
-- (OBJECTS FOUND IN THE CATALOG BUT NOT IN RTS TABLES)               --
------------------------------------------------------------------------
-- LOCK TABLE SYSIBM.SYSTABLESPACESTATS IN EXCLUSIVE MODE ;         

INSERT INTO SYSIBM.SYSTABLESPACESTATS
 (UPDATESTATSTIME,NACTIVE,EXTENTS
 ,LOADRLASTTIME
 ,REORGLASTTIME,REORGINSERTS,REORGDELETES,REORGUPDATES,REORGUNCLUSTINS
 ,REORGDISORGLOB,REORGMASSDELETE,REORGNEARINDREF,REORGFARINDREF
 ,STATSLASTTIME,STATSINSERTS,STATSDELETES,STATSUPDATES,STATSMASSDELETE
 ,COPYLASTTIME,COPYUPDATEDPAGES,COPYCHANGES
 ,IBMREQD
 ,DBID,PSID,PARTITION,INSTANCE,SPACE,TOTALROWS 
 ,DBNAME,NAME)
 SELECT CURRENT TIMESTAMP 
 ,CASE A.SPACEF 
  WHEN -1 THEN CASE A.SPACE 
               WHEN 0 THEN NULL 
               ELSE A.SPACE / B.PGSIZE 
               END
  ELSE MIN( 2147483647 , ( MAX(A.SPACEF , A.SPACE) / B.PGSIZE ) )
  END
 ,CASE A.EXTENTS
  WHEN -1 THEN NULL
  ELSE A.EXTENTS
  END
  ,TIMESTAMP('0001-01-01-00.00.00.000000')
  ,TIMESTAMP('0001-01-01-00.00.00.000000'), 0 , 0 , 0 , 0   
  , 0 , 0 , 0 , 0  
  ,CASE 
    WHEN A.STATSTIME = TIMESTAMP('0001-01-01-00.00.00.000000') 
    THEN A.STATSTIME 
    WHEN A.STATSTIME < A.CREATEDTS THEN 
                       TIMESTAMP('0001-01-01-00.00.00.000000')
    ELSE A.STATSTIME
    END
  , 0 , 0 , 0 , 0 
  ,TIMESTAMP('0001-01-01-00.00.00.000000'), 0 , 0   
  , 'N'   
  ,B.DBID,B.PSID,A.PARTITION,B.INSTANCE  
  ,CASE A.SPACEF 
    WHEN -1 THEN CASE A.SPACE 
                 WHEN 0  THEN NULL  
                 ELSE A.SPACE 
                 END 
    ELSE MAX( MIN( 2147483647 , A.SPACEF ) , A.SPACE)
    END 
   ,CASE A.CARDF 
    WHEN -1 THEN NULL  
    ELSE A.CARDF 
    END 
   ,A.DBNAME,A.TSNAME  
    FROM SYSIBM.SYSTABLEPART  A 
        ,SYSIBM.SYSTABLESPACE B 
    WHERE NOT EXISTS (SELECT C.*  
                     FROM SYSIBM.SYSTABLESPACESTATS C 
                     WHERE A.DBNAME = C.DBNAME 
                       AND A.TSNAME = C.NAME  
                       AND A.PARTITION = C.PARTITION)    
     AND NOT A.SPACE  = -1   
     AND A.DBNAME     = B.DBNAME   
     AND A.TSNAME     = B.NAME    
  ; 
COMMIT ;
-- LOCK TABLE SYSIBM.SYSINDEXSPACESTATS IN EXCLUSIVE MODE ;
INSERT INTO SYSIBM.SYSINDEXSPACESTATS
 (UPDATESTATSTIME
 ,NLEVELS,NLEAF,NACTIVE,SPACE,EXTENTS
 ,LOADRLASTTIME
 ,REBUILDLASTTIME
 ,REORGLASTTIME,REORGINSERTS,REORGDELETES,REORGAPPENDINSERT
 ,REORGPSEUDODELETES,REORGMASSDELETE,REORGLEAFNEAR,REORGLEAFFAR
 ,REORGNUMLEVELS
 ,STATSLASTTIME,STATSINSERTS,STATSDELETES,STATSMASSDELETE
 ,COPYLASTTIME,COPYUPDATEDPAGES,COPYCHANGES
 ,IBMREQD
 ,DBID,ISOBID,PSID,PARTITION,INSTANCE
 ,TOTALENTRIES,DBNAME,NAME,CREATOR,INDEXSPACE)
  SELECT CURRENT TIMESTAMP
 ,CASE B.NLEVELS
   WHEN -1 THEN NULL
   ELSE B.NLEVELS
   END
  ,CASE B.NLEAF
   WHEN -1 THEN NULL
   ELSE B.NLEAF
   END
  ,CASE A.SPACEF
   WHEN -1 THEN CASE A.SPACE
                WHEN 0  THEN NULL
                ELSE A.SPACE / B.PGSIZE
                END
  ELSE MIN( 2147483647 , ( MAX(A.SPACEF , A.SPACE) / B.PGSIZE ) )
  END
 ,CASE A.SPACEF
  WHEN -1 THEN CASE A.SPACE
               WHEN 0  THEN NULL
               ELSE A.SPACE
               END
  ELSE MAX( MIN( 2147483647 , A.SPACEF ) , A.SPACE)
  END
 ,CASE A.EXTENTS
   WHEN -1 THEN NULL
   ELSE A.EXTENTS
   END
 ,TIMESTAMP('0001-01-01-00.00.00.000000')
 ,TIMESTAMP('0001-01-01-00.00.00.000000')
 ,TIMESTAMP('0001-01-01-00.00.00.000000'), 0 , 0 , 0
 , 0 , 0 , 0 , 0 , 0
 ,CASE
  WHEN A.STATSTIME = TIMESTAMP('0001-01-01-00.00.00.000000')
  THEN A.STATSTIME
  WHEN A.STATSTIME < A.CREATEDTS THEN
                     TIMESTAMP('0001-01-01-00.00.00.000000')
  ELSE A.STATSTIME
  END
 , 0 , 0 , 0
 ,TIMESTAMP('0001-01-01-00.00.00.000000'), 0 , 0
 , 'N'
 ,B.DBID,B.ISOBID, C.PSID
 ,A.PARTITION,C.INSTANCE
 ,CASE A.CARDF
  WHEN -1 THEN NULL
  ELSE A.CARDF
  END
 ,B.DBNAME,B.NAME,B.CREATOR,B.INDEXSPACE
  FROM SYSIBM.SYSINDEXPART  A
      ,SYSIBM.SYSINDEXES    B
      ,SYSIBM.SYSTABLESPACE C
      ,SYSIBM.SYSTABLES     D
 WHERE NOT EXISTS (SELECT E.*
                   FROM SYSIBM.SYSINDEXSPACESTATS E
                   WHERE B.DBNAME = E.DBNAME
                     AND B.INDEXSPACE = E.INDEXSPACE
                     AND A.PARTITION  = E.PARTITION)
   AND B.CREATOR    = A.IXCREATOR
   AND B.NAME       = A.IXNAME
   AND NOT A.SPACE  = -1
   AND B.TBCREATOR  = D.CREATOR
   AND B.TBNAME     = D.NAME
   AND D.DBNAME     = C.DBNAME
   AND D.TSNAME     = C.NAME
;
COMMIT ;

 

It may even be a good idea to run these two queries on a regular basis… just in case!
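
If you only want to check, rather than repair, a count of what is missing does the job; this read-only sketch just mirrors the WHERE logic of the first INSERT:

SELECT COUNT(*)
FROM SYSIBM.SYSTABLEPART A
WHERE NOT EXISTS (SELECT 1
                  FROM SYSIBM.SYSTABLESPACESTATS C
                  WHERE A.DBNAME    = C.DBNAME
                    AND A.TSNAME    = C.NAME
                    AND A.PARTITION = C.PARTITION)
  AND NOT A.SPACE = -1
;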

I would like to know how many rows these queries INSERT at your shop – here in Düsseldorf, in the labs, they found two tablespaces and three indexes in a DB2 11 NFM system.

As always, any questions or comments would be most welcome!

TTFN,

Roy Boxwell

2015-06 SOUNDEX and other “cool” features – Part four: DB2 10-DB2 11

 

Update for DB2 10 and all new for DB2 11

 

This newsletter completes my walk-through of new scalar functions that I began a few years ago. I will start today with two new ones that were introduced to DB2 10 by APAR PM56631 in May 2012: PACK (a scalar function) and its opposite number UNPACK (a row function).

 

New Scalar functions in DB2 10:

Some PACK examples

PACK basically “packs” together a list of columns, or literals, into one variable-length binary string in UNICODE format. Here are some real PACK examples—note that SYSDUMMYU is used!

SELECT PACK(CCSID 1208, 'ALICE', DATE('2014-07-22') , DOUBLE(8.6))      
FROM SYSIBM.SYSDUMMYU;                                                  
---------+---------+---------+---------+---------+---------+---------+--
00000301C0018001E004B80005414C494345201407224021333333333333            
DSNE610I NUMBER OF ROWS DISPLAYED IS 1
SELECT PACK(CCSID 1208, 'ROY', BIGINT(123456789) ,                      
TIMESTAMP('2014-07-22-07.43.23.123456'))                                
FROM SYSIBM.SYSDUMMYU;                                                  
---------+---------+---------+---------+---------+---------+---------+--
00000301C001EC018804B80003524F5900000000075BCD15000620140722074323123456
DSNE610I NUMBER OF ROWS DISPLAYED IS 1
SELECT PACK(CCSID 1208,                                                 
TIMESTAMP('2014-07-22-07.55.23.123456'))                                
FROM SYSIBM.SYSDUMMYU;                                                  
---------+---------+---------+---------+---------+---------+---------+--
0000010188000620140722075523123456                                      
DSNE610I NUMBER OF ROWS DISPLAYED IS 1

You can see the format that it creates is actually pretty standard and can have uses in normal processing!

Using the second example above, we can see what is in the output:

The VARBINARY string starts with a control flag byte, which we can happily ignore:

00

Then comes the number of elements in the data:

0003

Then comes one SQLTYPE per element. If any of these is odd-numbered, then the data value is *not* in the string, as it is in fact NULL:

01C0   Varying Character data 448 in decimal
01EC   BIGINT data 492 in decimal
0188   TIMESTAMP data 392 in decimal

 

The first data element, the varying character string containing “ROY”, looks like

04B8   CCSID of String 1208 in decimal
0003   Length of string 
524F59 Data in UNICODE

 

The second data element BIGINT containing 123456789 looks like

00000000075BCD15      BIGINT data big endian always!

The third data element TIMESTAMP containing ‘2014-07-22-07.43.23.123456’ looks like

0006                  TIMESTAMP precision
20140722074323123456  TIMESTAMP data
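
To see the odd-numbered SQLTYPE rule in action, you can PACK a typed NULL (an untested sketch): the header should still count three elements, but the BIGINT’s SQLTYPE comes back odd-numbered and no data bytes follow for it.

SELECT PACK(CCSID 1208, 'ROY', CAST(NULL AS BIGINT),
TIMESTAMP('2014-07-22-07.43.23.123456'))
FROM SYSIBM.SYSDUMMYU;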

UNPACK statements

Now, these VARBINARY strings can get very long indeed, but what can you do with all of the data? Well, obviously UNPACK springs to mind… Here’s the above data again but, to get it into an UNPACK statement, you have to create a FUNCTION first… something like this:

--#SET TERMINATOR $                                        
CREATE FUNCTION PACKUDF ()                                 
        RETURNS VARBINARY(128)                             
        VERSION V1                                         
        LANGUAGE SQL                                       
        DETERMINISTIC                                      
        NO EXTERNAL ACTION                                 
        CONTAINS SQL                                       
        CALLED ON NULL INPUT                               
        STATIC DISPATCH                                    
        NOT SECURED                                        
        DISABLE DEBUG MODE                                 
        BEGIN                                              
        DECLARE PACKSTR VARBINARY(128);                    
        SET PACKSTR =                                      
               PACK(CCSID 1208                             
                   ,VARCHAR('ROY' , 40)                    
                   ,BIGINT(123456789)                      
                   ,TIMESTAMP('2014-07-22-07.43.23.123456')
                   );                                      
        RETURN PACKSTR;                                    
        END                                                
$   

This just does what the second example did, but buried in a Scalar SQL Function. This can then easily be called by just doing this:

SELECT UNPACK(PACKUDF()).*                                                           
          AS (USER_NAME    VARCHAR(40) CCSID UNICODE                    
             ,LARGE_NUMBER BIGINT                                       
             ,WHEN_USED    TIMESTAMP)                                   
FROM SYSIBM.SYSDUMMY1 ;                                                 
---------+---------+---------+-..--------+---------+---------+---------+---------+---
USER_NAME                      ..            LARGE_NUMBER  WHEN_USED                 
---------+---------+---------+-..--------+---------+---------+---------+---------+---
ROY                                             123456789  2014-07-22-07.43.23.123456
DSNE610I NUMBER OF ROWS DISPLAYED IS 1   
            

I deleted some white space (..) above just to get it all to fit on the page, by the way! Now quite *what* you would do with PACK and UNPACK I’ll leave up to you!!!

 

On now to DB2 11: Aggregate Functions and 11 new Scalar Functions for the new ARRAY data type

Here we got three new Aggregate Functions and 11 new Scalar Functions. Most of these are for the new ARRAY data type that was introduced in DB2 11. First, here’s how you can create ARRAYs, so that you can understand a bit more about the Functions that follow:

CREATE TYPE BOXWELL.ROYARRAY  AS INTEGER  ARRAY ??(??) ;     
CREATE TYPE BOXWELL.ROYARRAY4 AS CHAR(16) ARRAY ??( 32 ??) ; 
CREATE TYPE BOXWELL.ROYARRAY5 AS CHAR(16)                    
   ARRAY ??( VARCHAR(8) CCSID UNICODE FOR MIXED DATA ??) ;

Note that I use trigraphs ( ??( = [ and ??) = ] ), as I *hate* square brackets! (I would *love* to meet that developer in a dark alley one night… I know, I know, it is like that in the “other” SQL world, but that doesn’t mean I have to use them!)

 

Now, arrays are pretty cool for passing data back from SQL scalar functions, or as parameters of SQL scalar functions and native SQL procedures, as they simplify the handling of lots of same-typed parameters. This is really good for all the Java programmers out there who use array types all the time!

In fact, all of the ARRAY stuff is *only* allowed within the confines of Stored Procedures and SQL PL.

What also arrived in DB2 11 is the ability to “unravel the array” using the new “collection-derived-table” syntax—otherwise known as UNNEST—and here are the two examples from the doc:

 

Example 1: Suppose that PHONENUMBERS is a user-defined array type that is defined as an ordinary array.

RECENT_CALLS is an array variable of the PHONENUMBERS type. RECENT_CALLS contains the following phone numbers:

9055553907
4165554213
4085553678
The following SELECT statement uses UNNEST to retrieve the list of phone numbers from the array:

SELECT T.ID, T.NUM FROM UNNEST(RECENT_CALLS) WITH ORDINALITY AS T(NUM, ID);

ID NUM
1 9055553907
2 4165554213
3 4085553678

Example 2: Suppose that PHONELIST is an array variable of an associative array type, indexed by VARCHAR, that has been populated as follows:

SET PHONELIST[’Home’] = ’4443051234’;
SET PHONELIST[’Work’] = ’4443052345’;
SET PHONELIST[’Cell’] = ’4447893456’;

The following SELECT statement is executed:

SELECT T.ID, T.PHONE FROM UNNEST(PHONELIST) AS T(ID, PHONE);

The result table looks like this, although the order of rows might differ:

ID PHONE
Cell 4447893456
Home 4443051234
Work 4443052345

 

The Array Aggregate Function, ARRAY_AGG, and all of the Scalar Functions:

ARRAY_DELETE,
ARRAY_FIRST,
ARRAY_LAST,
ARRAY_NEXT,
ARRAY_PRIOR,
CARDINALITY,
MAX_CARDINALITY,
and TRIM_ARRAY,
are all available to use but only within the confines of SQL PL.
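
Since you cannot play with any of this outside of SQL PL, here is a little (untested!) sketch using the ROYARRAY type from above, just to show the shape of the thing:

--#SET TERMINATOR $
CREATE PROCEDURE BOXWELL.ROYARRAYTEST ()
        VERSION V1
        LANGUAGE SQL
        BEGIN
        DECLARE NUMBERS BOXWELL.ROYARRAY;
        DECLARE HOWMANY INTEGER;
        SET NUMBERS??(1??) = 10;
        SET NUMBERS??(2??) = 20;
        SET HOWMANY = CARDINALITY(NUMBERS);   -- now 2
        SET NUMBERS = TRIM_ARRAY(NUMBERS, 1); -- drop the last element
        SET HOWMANY = CARDINALITY(NUMBERS);   -- now 1
        END
$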

 

CHAR9 and VARCHAR9

Also new in DB2 11 are CHAR9 and VARCHAR9, which give you the good old DB2 9 behaviour of CHAR and VARCHAR, now frozen that way forever—I hope they will not actually be used!

XSLTRANSFORM was also delivered to do XML transformations, but I have never actually tried it. You must note this bit of text:

 

This user-defined function requires IBM SDK for z/OS, Java Technology Edition Version 6.
This user-defined function uses the XSLT support that is provided by the W3C XSL Transformations V1.0 Recommendation.

 

GROUPING & MEDIAN

Then we got two new Aggregate Functions. The first is GROUPING—used in the OLAP extensions for automatic sub- and grand-totaling—and the second is MEDIAN—which *only* works if you have the IDAA switched on!

So, lots of interesting stuff here. You may want to go back to the three newsletters from a few years ago (2012) to recap all of these “cool” features. That way you’ll have plenty to read if you get bored on holiday!

 

Feel free to send me your comments and ask questions!


2015-04 SQLCODEs of interest

 

Are your DB2 11 SQLCODES up-to-date?

I originally wanted to call this newsletter “SQLCODE101” but not all of my international readers may understand… Anyway, this newsletter is dedicated to that small group of people who, like me, share an interest in SQLCODEs.

The “newest” SQLCODES are not updated in copy book style checking routines

One thing I noticed years ago is how often the “newest” codes are not updated in copy book style checking routines.

I was at one customer site where their copy book entry looked like this:

 88 SQLCA-ERROR                VALUE -999 THRU -001.

And I choked on my coffee!
Scary huh?

It actually got worse when I then saw:

88 SQLCA-WARNING             VALUE +101 THRU +999.

Now these are soooo bad it hurts the eyes!
But as they were lurking in copy book code, they were simply never seen…

 

DB2 11, current documentation

As of DB2 11, current documentation shows the actual ranges are -30106 to -7 and +12 to +30100.

So the above COBOL should really look like this:

88 SQLCA-ERROR               VALUE -32000 THRU   -001.
88 SQLCA-WARNING             VALUE     +1 THRU    +99
                                     +101 THRU +32000.

 

The SQLCODE +100 is special, as it is “no row found” or “end of cursor”.

 

What other SQLCODEs are of general interest then?

Well here’s my list in no particular order:

 

-803 (Duplicate key)

This is sometimes called the “lazy update check”. First do an INSERT; if it bounces with a -803, make the key unique or change it to an UPDATE statement. Now, this logic is fine if the majority of the inserts succeed but, if the majority are failing, it is probably time to swap the logic around: try an UPDATE first and, if you get a +100, do an INSERT instead. This also stops any “negative SQLCODE monitors” from firing off too many false positives!
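
If neither ordering fits your failure ratio, a MERGE folds the whole decision into one statement. A rough sketch with invented table, column, and host-variable names – and do check the MERGE restrictions at your DB2 level first, as before DB2 12 the source must be a VALUES clause:

MERGE INTO MY_TABLE T
USING (VALUES (:HV-KEY, :HV-DATA))
      AS S (KEY_COL, DATA_COL)
ON T.KEY_COL = S.KEY_COL
WHEN MATCHED THEN
     UPDATE SET T.DATA_COL = S.DATA_COL
WHEN NOT MATCHED THEN
     INSERT (KEY_COL, DATA_COL)
     VALUES (S.KEY_COL, S.DATA_COL)
;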

 

-802 and its younger brother +802 (Numeric exception)

Now, do you see that I have two codes here? The +802 means a numeric exception has occurred, but the SQL carries on, with -2 set in the indicator variable:

 

 +802 EXCEPTION ERROR

+802 EXCEPTION ERROR exception-type HAS OCCURRED DURING operation-type OPERATION ON data-type DATA, POSITION position-number

Explanation: The exception error exception-type occurred while performing one of the following operations on a field that has a data-type of DECIMAL, FLOAT, SMALLINT, or INTEGER:

 

Whereas the -802 is a bit different:

 

 -802 EXCEPTION ERROR

-802 EXCEPTION ERROR exception-type HAS OCCURRED DURING operation-type OPERATION ON data-type DATA, POSITION position-number

Explanation: An exception error has occurred in the processing of an SQL arithmetic function or arithmetic expression. The exception error occurred in one of the following areas:

  • In the SELECT list of an SQL SELECT statement.
  • In the search condition of a SELECT, UPDATE, MERGE, or DELETE statement.
  • In the SET clause of the UPDATE operation.
  • During the evaluation of an aggregate function.

So you must really take good care here!

 

Very interesting SQLCODEs

 

-514 and -518 (Prepared SQL gone)

Now these are *very* interesting, and you should have a quick look in your monitor to see how many of these you get. I really hope that all your SQL code has retry logic to re-PREPARE the SQL that has gone. I was rather surprised to see prepared statements in current, active use getting these messages. But if the data is flushed from the DSC, then this is what can happen!

A little snippet from one of Namik Hrle’s DSC presentations gives the “hint” that a prepared statement can be thrown out of the cache at any time (not just by the obvious bad guys like RUNSTATS etc.).

 

-331, +335, +445 and +20141 (Truncation or Character Conversion problem)

Here, only the -331 (CHARACTER CONVERSION CANNOT BE PERFORMED BECAUSE A STRING, POSITION position-number, CANNOT BE CONVERTED FROM source-ccsid TO target-ccsid, REASON reason-code) actually returns an error code, while all the others – +335 (use of a substitute character), +445 and +20141 (truncation) – carry on regardless. This might not be what is actually desired.

 

+222 (Positioned on a hole)

You have positioned onto a deleted/updated row in a SENSITIVE STATIC cursor. These warnings should just trigger another FETCH, until either the end of the cursor is reached or a real row is found.

 

-911 (Timeout or Deadlock)

Now this beauty actually does the ROLLBACK for you, so you must be aware of that when you get this bad guy!

 

 

Do you have any SQLCODEs that you treat specially?

I would love to hear from you if you do!

 

 

As usual, if you have any comments or queries please feel free to drop me a line!

TTFN

 

Roy Boxwell

 

2015-01 BIFCIDS – Where’s the BIF?

How will you deal with loop-hole usage in production code?

 

 

The IFCIDs 366 and 376

DB2 provides many and varied IFCIDs. But for today, I’m most interested in 366 and 376. The 366 is available in DB2 10 and the 376 in DB2 11. Now I like to call these “BIFCIDs”, because they are triggered whenever a BIF is used that will behave differently in the next release of DB2 than it does today. (They are also triggered when changing Application Compatibility settings in DB2 11 and higher.)

 

So where’s the BIF?


Well, a BIF is a Built-In Function such as CHAR, DECIMAL, etc. There are hundreds of them these days. In the last few DB2 releases, IBM has changed a few to make DB2 more compatible with SQL standards. They have actually closed a couple of loop-holes, where “bad” data could be accepted and processed.

 

Loop-hole user?

What happens is: someone somewhere found this loop-hole and used it in production code. Now, when you upgrade your DB2, this code will either fail or give erroneous results – which is never good. Hence IBM created IFCID 366. This is output every time an SQL statement that contains a candidate BIF is PREPAREd or executed. There were so many of these that IBM introduced a sort of condensed version, which only triggers once for the first execution or prepare – but sadly that IFCID—376—is only available in DB2 11.
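
By the way, to see whether you are getting any of these at all, you can simply switch the trace on for a short interval. A sketch of the command – the trace class and destination are site-specific choices, so check your own monitoring standards first:

-START TRACE(PERFM) CLASS(32) IFCID(366) DEST(SMF)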

Where can, or will, this really hurt?

 

Looking into the documentation for these IFCIDs you will see a long list of when they are written:

***********************************************************************
**  IFCID 0366 is a serviceability trace.                            **
**  It can be used to identify applications that are affected        **
**  by incompatible changes.                                         **
**  The QW0366FN field indicates the type of incompatible change:    **
**                                                                   **
**  QW0366FN = 1                                                     **
**  Indicates that the pre Version 10 CHAR built-in function has     **
**  been invoked. There is an incompatible change to the output of   **
**  the CHAR function for some decimal data. The zparm               **
**  BIF_COMPATIBILITY and/or the SYSCOMPAT_V9 schema have been used  **
**  by this application to get the old behavior. Please make the     **
**  appropriate changes and rebind with the SYSCURRENT schema to     **
**  use the Version 10 CHAR(decimal) built-in function.              **
**  (PM29124 V10 only, usermod V8/V9)                                **
**                                                                   **
**  QW0366FN = 2                                                     **
**  Indicates that the pre Version 10 VARCHAR built-in function or   **
**  CAST(decimal AS CHAR or VARCHAR) has been invoked.               **
**                                                                   **
**  QW0366FN = 3                                                     ** 
**  Indicates that an unsupported character representation of a      ** 
**  timestamp string was used. PM48741 V10 only.                     **
**                                                                   ** 
**  QW0366FN = 4                                                     **
**  A QW0366FN 4 record indicates that the statement uses the        **
**  word ARRAY_EXISTS as an unqualified user-defined function name   **
**  in a context that may be incompatible with Version 11.           **
**                                                                   **
**  QW0366FN = 5                                                     ** 
**  A QW0366FN 5 record indicates that the statement uses the        **
**  word CUBE as an unqualified user-defined function name           **
**  in a context that may be incompatible with Version 11.           **
**                                                                   **
**  QW0366FN = 6                                                     **
**  A QW0366FN 6 record indicates that the statement uses the        **
**  word ROLLUP as an unqualified user-defined function name         **
**  in a context that may be incompatible with Version 11.           **
**                                                                   ** 
**  QW0366FN = 7                                                     **
**  A QW0366FN 7 record indicates that DB2 for z/OS server issued    **
**  a SQLCODE -301 for incompatible data type conversion from        **
**  string data type (e.g. CHAR, VARCHAR, GRAPHIC, VARGRAPHIC        **
**  etc.) to numeric data type in V10 CM mode when implicit          **
**  cast is not supported or V10 NFM mode when DDF_COMPATIBILITY     **
**  zparm is set to DISABLE_IMPCAST_NJV or SP_PARMS_NJV to           **
**  disable implicit cast, and the client is CLI Driver              **
**  or v11 NFM mode & APPLCOMPAT = V10R1 when DDF_COMPATIBILITY      **
**  is set to SP_PARMS_NJV or DISABLE_IMPCAST_NJV to disable         **
**  implicit cast either from string data type to numeric or         ** 
**  from numeric data type to string data type.                      **
**                                                                   **
**  QW0366FN = 8                                                     **
**  A QW0366FN 8 record indicates that DB2 for z/OS server           **
**  returned output data match the data types of the                 **
**  corresponding CALL statement arguments when DDF_COMPATIBILITY    **
**  zparm is set to SP_PARMS_NJV.                                    **
**                                                                   **
**  QW0366FN = 9                                                     **
**  A QW0366FN 9 record indicates a data type conversion from        **
**  a TIMESTAMP WITH TIME ZONE input to a TIMESTAMP data             **
**  during input host variable bind-in process on server when        **
**  DDF_COMPATIBILITY zparm is set to IGNORE_TZ to ignore the        **
**  time zone information sent by Java IBM Data Server Driver.       **
**                                                                   **
**  QW0366FN = 10                                                    ** 
**  RTRIM, LTRIM or STRIP version 9 being used with mixed data       **
**                                                                   **
**  QW0366FN = 1101                                                  ** 
**  Indicates that an INSERT statement that inserts into an XML      **
**  column without the XMLDOCUMENT function has been processed       **
**  (which should result in SQLCODE -20345 when run on a DB2         **
**  release prior to V11). Starting with V11, the SQL error will     **
**  no longer be issued and the application will no longer receive   **
**  the SQLCODE for this statement.                                  **
**                                                                   ** 
**  QW0366FN = 1102                                                  **
**  Indicates that V10 XPath evaluation behavior was in effect,      **
**  which resulted in an error. For instance, a data type            **
**  conversion error could have occurred for a predicate that would  **
**  otherwise be evaluated to false. Starting from V11, such         **
**  "irrelevant" errors might be suppressed, so an application       **
**  might no longer receive the SQLCODE for this statement.          **
**                                                                   **
**  QW0366FN = 1103                                                  **
**  Indicates that a dynamic SQL uses the ASUTime limit that has     **
**  been set for the entire thread for RLF reactive governing.       **
**  For instance, when a dynamic SQL is processed from package A,    **
**  if the ASUTime limit is already set during other dynamic SQL     **
**  processing from package B in the same thread, the SQL from       **
**  package A will use the ASUTime limit set during the SQL          **
**  processing from package B. Starting with V11, dynamic SQLs from  **
**  multiple packages will use the ASUTime limit that is set         **
**  considering their own package information.                       **
**                                                                   **
**  QW0366FN = 1104, 1105, 1106, 1107                                **
**  Indicates that a CLIENT special register (CLIENT_USERID,         **
**  CLIENT_WRKSTNNAME, CLIENT_APPLNAME, CLIENT_ACCTNG) has been set  **
**  to a value that is longer than what is supported prior to V11.   **
**  A shorter value has been used instead.                           **
**                                                                   **
**  QW0366FN = 1108                                                  **
**  Indicates that a CLIENT special register (CLIENT_USERID,         **
**  CLIENT_WRKSTNNAME, CLIENT_APPLNAME, CLIENT_ACCTNG) has been set  **
**  to a value that is longer than what is supported prior to V11.   **
**  Values truncated to the lengths supported prior to V11 have      **
**  been used for the RLF table search instead.                      **
**                                                                   **
**  QW0366FN = 1109                                                  **
**  Indicates that CAST(string AS TIMESTAMP) was processed for an    **
**  input string of length 8 and the input was treated as a store    **
**  clock value (or the input string was of length 13 and was        **
**  treated as a GENERATE_UNIQUE value). This behavior is incorrect  **
**  for a CAST and is valid for the TIMESTAMP built-in function      **
**  only. This behavior is being corrected in DB2 11 so that input   **
**  to CAST is not treated as a store clock value nor as             **
**  GENERATE_UNIQUE.                                                 **
**                                                                   **
**  QW0366FN = 1110                                                  **
**  Indicates the integer argument of the SPACE function is greater  **
**  than 32764.                                                      **
**                                                                   **
**  QW0366FN = 1111                                                  **
**  Indicates the optional integer argument of the VARCHAR function  **
**  has a value greater than 32764.                                  **
***********************************************************************

 

Useful stuff indeed!

Phew! Not a bad list, huh? Now you see why these IFCIDs are so useful. It could well be that you have none of these “alive” in your system today. Or, of course, it could be that you get millions of the things! Somehow you will have to work out a way to save the data, analyse it to get to the root cause, and then, finally, fix the problem(s).

 

Saved by APPLCOMPAT?

You could argue that the new DB2 11 parameter, Application Compatibility, will save you, but this is really a false economy. All it guarantees is that the code will still “run”. However, in two more DB2 releases the code will fail – that is, in about six years – and by then who will even know *how* to change which piece of source code or, perhaps, even where that source code is?

 

Saved by BIFCIDs

Personally, what I would do is run our SQL WorkloadExpert tool to trap all the required [B]IFCIDs for a few hours (at first!). Then I would analyse the results, fix the code where it needs fixing – and repeat! I would keep doing this until no more IFCID records appear, and then I would be set!


What is even better is that our SQL WorkloadExpert keeps working even when new QW0366FN values appear – so when IBM decides to add another code (like the new values 9 and 10 above, for example), this BIF Usage analysis still works correctly.

 

Of course, you may have another tool that you use at your site.

Can it see “Where’s the BIF?”

How will you deal with loop-hole usage in production code?

 

As usual, any question or comments gladly welcome!

 

TTFN

Roy Boxwell