2025-07 zIIPing along!

This month I wish to spend a bit of time delving into the zIIP processor(s) that, hopefully, you all have available to use “for free”. Naturally, they cost a few thousand bucks apiece, but their usage does not count towards the rolling four-hour average and so is basically free. Of course, if you have a different price model where CPU usage is all-inclusive, then the use of these handy little beasts is moot!

What Is It?

They were first introduced in 2006 with the IBM System z9 processor complex. Their full name is System z Integrated Information Processor, normally shortened to “zIIP”. They followed on from the earlier zAAP, which was used for Java, and the IFL, which is used for Linux and z/VM. Originally, they were just for Db2 workloads, but nowadays quite a lot of non-Db2 work is zIIP eligible.

Eligible?

Yep, the wording is important! The fact that some function or code is able to run on a zIIP does not mean it *will* run on a zIIP. They are, after all, processors, and when they are all busy, your workload will just trundle on using the normal CPs (Central Processors) you have.

How Many?

It started out nice and easy… You could not have more zIIPs than CPs in your plex, so a 1:1 ratio. Then along came the zEC12 and the ratio changed to not more than 2:1. Nowadays, with the z16, IBM have thrown in the towel and announced that there is no limit anymore!

When They Arrived

The first Db2 to exploit the zIIP was the “big change” version DB2 V8, when everything went Unicode and long names on us all!

What have They Done for Us?

From the get-go, any TCP/IP-based remote SQL was eligible for offload to a zIIP. This was a very, very good thing indeed and saved people mega-bucks. Parallel query child processes running under a dependent enclave SRB (or an independent enclave SRB if coming in over TCP/IP) also got zIIP support, as did some utility processes: a portion of the index build for LOAD, REORG and REBUILD INDEX, which runs under a dependent enclave SRB, and also a portion of sorting.

Sorry, SRB What?

You might have noticed a TLA (Three Letter Abbreviation) “SRB” occurring a lot in that text! So, what is an SRB and why is it so important? On mainframes, all work runs under two kinds of control block: Task Control Blocks and Service Request Blocks. Normally, user programs and system programs use a Task Control Block (the TCB that we all know and love) and all run on normal CPs, not zIIPs! The Service Request Block (SRB), however, is for system service routines. They are initiated by a TCB to do special stuff, and starting one is called “scheduling an SRB”. To do this, your program must be running in a higher-authorized state called “supervisor state”. SRBs run in parallel with the TCB task that scheduled them; they cannot own storage but can use the storage of the TCB. Only SRBs are eligible to be offloaded to a zIIP.

And Then?

Well, when IBM brought out the z13 they merged the zAAP support onto the zIIP, and since then the general direction has been: if a task runs under an SRB then it *can* be made zIIP eligible. This has meant a gradual increase in vendor take-up and IBM usage of these “helping hands”.

What about Db2 Usage?

In DB2 V9 they announced the actual, until then hidden, limits of use: for TCP/IP remote SQL 60% offload, for parallel queries 80% offload, utilities up to 100% offload and, brand new, XML also up to 100% offload!

DB2 10

RUNSTATS was added, but *not* the distribution statistics and inline statistics parts, and Db2 buffer pools got 100% offload for prefetch and deferred write processing.

Db2 11

Not only did the B go lower case, but RUNSTATS got column group distribution statistics processing, and System Agent processing got up to 100% offload when running under enclave SRBs, except for P-lock negotiation. This included page set castout, log read, log write, index pseudo-delete cleanup and XML multi-version document cleanup.

It was also here that they created the zIIP “needs help” function for when a delay occurs. This is controlled by the z/OS parameter IIPHONORPRIORITY and its YES/NO setting. YES is the default and tells a stalled zIIP to shunt the work over to a CP. That might, or might not, be a good idea depending on your cost or time SLAs.

Db2 12

This brought RESTful support at 60% offload, parallelism went up to 100% offload and RUNSTATS also went to 100% offload.

Db2 13

All Db2 SQL AI functions went straight to 100% offload and the COPY utility got 100% offload, but only in the COPYR subphase.

COPY now in Db2 13 FL507

I recently kicked my little test Db2 up to Db2 13 FL507 and then waited a couple of days to see the zIIP usage that COPY just got. We were informed it is just in the COPYR subphase of the utility. I use the SYSIBM.SYSUTILITIES table to track everything, so I wrote a little SQL that lists out all the utilities with counts, CPU, zIIP and elapsed times.

Here’s the SQL, split into utilities with and without zIIP usage:

SELECT SUBSTR(NAME , 1 , 18)                              AS UTILITY
      ,COUNT(*)                                           AS COUNT 
      ,(SUM(CPUTIME)     * 1.000) / 1000000               AS CPU   
      ,(SUM(ZIIPTIME)    * 1.000) / 1000000               AS ZIIP  
      ,((SUM(CPUTIME) + SUM(ZIIPTIME)) * 1.000) / 1000000 AS TOTAL 
      ,(SUM(ELAPSEDTIME) * 1.000) / 1000000               AS ELAPSED
FROM SYSIBM.SYSUTILITIES                                           
WHERE ZIIPTIME > 0                                                 
GROUP BY NAME                                                      
UNION ALL                                                          
SELECT SUBSTR(NAME , 1 , 18)                              AS UTILITY
      ,COUNT(*)                                           AS COUNT 
      ,(SUM(CPUTIME)     * 1.000) / 1000000               AS CPU   
      ,(SUM(ZIIPTIME)    * 1.000) / 1000000               AS ZIIP  
      ,((SUM(CPUTIME) + SUM(ZIIPTIME)) * 1.000) / 1000000 AS TOTAL 
      ,(SUM(ELAPSEDTIME) * 1.000) / 1000000               AS ELAPSED
FROM SYSIBM.SYSUTILITIES                                            
WHERE ZIIPTIME = 0                                                 
GROUP BY NAME                                                      
ORDER BY 1 , 2                                                     
FOR FETCH ONLY                                                      
WITH UR                                                            
;                                                                  

Here is my output:

---------+------------+----------+--------+----------+-----------+-
UTILITY            COUNT        CPU     ZIIP      TOTAL     ELAPSED
---------+------------+----------+--------+----------+-----------+-
CATMAINT               5       .282     .000       .282       3.477
COPY                 925     11.673    4.907     16.581     914.838
COPY               60471   1017.939     .000   1017.939   65126.853
LOAD                   2       .005     .000       .005        .012
LOAD                 802     17.453    3.852     21.306    1990.150
MODIFY RECOVERY    59128    391.163     .000    391.163   15461.098
MODIFY STATISTICS     47       .120     .000       .120       1.276
QUIESCE               10       .015     .000       .015        .156
REBUILD INDEX          3       .027     .000       .027        .797
REBUILD INDEX          9       .082     .002       .085       2.502
RECOVER                9       .047     .000       .047       1.009
REORG                  4       .022     .000       .022       1.942
REORG                 28      2.075     .427      2.503     178.284
REPORT RECOVERY        3       .059     .000       .059        .454
RUNSTATS              33       .096     .000       .096       4.695
RUNSTATS            3575     44.477   92.323    136.801    1182.851
UNLOAD              1688    129.379     .000    129.379     989.501
DSNE610I NUMBER OF ROWS DISPLAYED IS 17

Here, you can see which utilities are zIIP enabled and how much the zIIPs save us. The new kid on the block, COPY, actually saves us about 30%, which is *not* to be sneezed at!
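
If you just want the offload percentages rather than raw numbers, here is a minimal variation of the SQL above (it only uses the same SYSIBM.SYSUTILITIES columns, so it should run wherever the first query does) showing, per utility, how much of the total CPU landed on the zIIP:

SELECT SUBSTR(NAME , 1 , 18)                              AS UTILITY
      ,COUNT(*)                                           AS COUNT
      ,(SUM(ZIIPTIME) * 100.0)
         / (SUM(CPUTIME) + SUM(ZIIPTIME))                 AS PCT_ON_ZIIP
FROM SYSIBM.SYSUTILITIES
WHERE ZIIPTIME > 0          -- only utilities that used a zIIP at all
GROUP BY NAME
ORDER BY 3 DESC
FOR FETCH ONLY
WITH UR
;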

Checking in Batch

I add the “hidden” parameter STATSLVL(SUBPROCESS) to all my utilities so that they output more info, as I am a nerd and love more data! The numbers never all add up, so you must be a little careful, but here’s an example image copy JCL with output showing the counters and details:

//ICU005   EXEC PGM=DSNUTILB,REGION=32M,                
//     PARM=(DD10,'DD1DBCO0ICU005',,STATSLVL(SUBPROCESS))
//STEPLIB  DD DISP=SHR,DSN=DSND1A.SDSNEXIT.DD10         
//         DD DISP=SHR,DSN=DSND1A.SDSNLOAD              
//DSSPRINT DD SYSOUT=*                                  
//* THRESHOLD REQUEST DB2CAT REQUEST                     
//SYSIN    DD *                                         
 COPY TABLESPACE DSNDB01.SYSSPUXA                       
    COPYDDN (SYSC1001)                                  
    FULL YES SHRLEVEL CHANGE                            
//SYSC1001 DD DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,       
//            SPACE=(1,(352,352),RLSE),AVGREC=M,        
//            DSN=COPY.DD10.DSNDB01.SYSSPUXA.P0000.D25195
//SYSPRINT DD SYSOUT=*

Normal output:

DSNU000I    195 07:45:39.89 DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = DD1DBCO0ICU005                          
DSNU1044I   195 07:45:39.90 DSNUGTIS - PROCESSING SYSIN AS EBCDIC          
DSNU050I    195 07:45:39.90 DSNUGUTC -  COPY TABLESPACE DSNDB01.SYSSPUXA COPYDDN(SYSC1001) FULL YES SHRLEVEL CHANGE
DSNU3031I -DD10 195 07:45:39.91 DSNUHUTL - UTILITY HISTORY COLLECTION IS ACTIVE. 
                      LEVEL: OBJECT, EVENTID: 238604 
DSNU3033I -DD10 195 07:45:39.92 DSNUHOBJ - SYSIBM.SYSOBJEVENTS ROWS INSERTED FOR OBJECT-LEVEL HISTORY             
DSNU400I    195 07:46:23.09 DSNUBBID - COPY PROCESSED FOR TABLESPACE DSNDB01.SYSSPUXA
                      NUMBER OF PAGES=222109
                      AVERAGE PERCENT FREE SPACE PER PAGE =  2.75
                      PERCENT OF CHANGED PAGES =  0.00
                      ELAPSED TIME=00:00:43   
DSNU428I    195 07:46:23.09 DSNUBBID - DB2 IMAGE COPY SUCCESSFUL FOR TABLESPACE DSNDB01.SYSSPUXA

Then the extra stuff – sorry about the formatting, but WordPress is not good for batch output:

----------------------------------------------------------------------------------------------------------
                                          U T I L I T Y   S T A T I S T I C S                             
                                                                                                          
INTERVAL = UTILITY HISTORY        CPU (SEC)  = 0.000288          ZIIP = 0.000000              
   LEVEL = UTILINIT SUBPROCESS    ELAPSED TIME (SEC) = 0.000                                              
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                 6            2                                                                      
  BP32K               2            1                                                                      
                                                                                                          
  TOTAL               8            3                                                                      
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = UTILITY HISTORY        CPU (SEC)  = 0.000091          ZIIP = 0.000000              
   LEVEL = UTILINIT SUBPROCESS    ELAPSED TIME (SEC) = 0.000                                              
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                 3                                                                                   
  BP32K               1            1                                                                      
                                                                                                          
  TOTAL               4            1                                                                      
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = OBJECT-LEVEL HISTORY   CPU (SEC)  = 0.000147          ZIIP = 0.000000              
   LEVEL = UTILINIT SUBPROCESS    ELAPSED TIME (SEC) = 0.001                                              
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                 6            2             1                                                        
  BP32K               2            1                                                                      
                                                                                                          
  TOTAL               8            3             1                                                        
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = UTILINIT               CPU (SEC)  = 0.002101          ZIIP = 0.000000              
   LEVEL = PHASE                  ELAPSED TIME (SEC) = 0.021                                              
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                48            6             2                                                        
  BP32K               9            5                                                                     1
  BP32K               5            3                                                                      
  TOTAL              62           14             2                                                       1
                                                                                                          
  DDNAME        DS OPEN     DS CLOSE      READ I/O     WRITE I/O    I/O CHECKS      I/O WAIT    END OF VOL
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  SYSPRINT                                                     3             3                            
                                                                                                          
  TOTAL                                                        3             3                            
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = COPYRDN0001      "     CPU (SEC)  = 0.008764          ZIIP = 0.273090              
   LEVEL = SUBPHASE               ELAPSED TIME (SEC) = 43.033                                             
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0            223486            3          2713             1          4982                            
  BP32K              73           50                                                                     1
  BP32K               2            2                                                                      
                                                                                                          
  TOTAL          223561           55          2713             1          4982                           1
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = COPYWDN0001      "     CPU (SEC)  = 0.357434          ZIIP = 0.000000              
   LEVEL = SUBPHASE               ELAPSED TIME (SEC) = 43.032                                             
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = Pipe Statistics                                                                                
    TYPE = COPY Data Pipe000                                                                              
                                                                                                          
  Records in:                                     222,110                                                 
  Records out:                                    222,110                                                 
  Waits on full pipe:                                 360                                                 
  Waits on empty pipe:                                  0                                                 
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = COPY                   CPU (SEC)  = 0.004909          ZIIP = 0.000000              
   LEVEL = PHASE                  ELAPSED TIME (SEC) = 43.167                                             
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                22            7             1                                                        
  BP32K               9            8                                                                      
                                                                                                          
  TOTAL              31           15             1                                                        
                                                                                                          
  DDNAME        DS OPEN     DS CLOSE      READ I/O     WRITE I/O    I/O CHECKS      I/O WAIT    END OF VOL
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  SYSPRINT                                                     4             4                            
  SYSC1001            1            1                       37022         37022        38.793             1
                                                                                                          
  TOTAL               1            1                       37026         37026        38.793             1
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = UTILTERM               CPU (SEC)  = 0.000150          ZIIP = 0.000000              
   LEVEL = PHASE                  ELAPSED TIME (SEC) = 0.002                                              
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                 2                                                                                   
  BP32K               2            1                                                                      
                                                                                                          
  TOTAL               4            1                                                                      
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = COPY                   CPU (SEC)  = 0.373401          ZIIP = 0.273090              
   LEVEL = UTILITY                ELAPSED TIME (SEC) = 43.191                                             
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                72           13             3                                                        
  BP32K              20           14                                                                     1
  BP32K               5            3                                                                      
                                                                                                          
  TOTAL              97           30             3                                                       1
                                                                                                          
  DDNAME        DS OPEN     DS CLOSE      READ I/O     WRITE I/O    I/O CHECKS      I/O WAIT    END OF VOL
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  SYSPRINT                                                     7             7                            
  SYSC1001            1            1                       37022         37022        38.793             1
                                                                                                          
  TOTAL               1            1                       37029         37029        38.793             1
                                                                                                          
----------------------------------------------------------------------------------------------------------

Here, you can easily see that the only phase with any zIIP values is INTERVAL = COPYRDN0001, which is obviously the COPYR subphase IBM mentioned, so that is 100% correct!

Bells and Whistles?

Here’s a list of the z/OS parameters of interest in the IEAOPTxx parmlib member that have to do with zIIPs:

  1. IIPHONORPRIORITY=xxx – Default is YES.
  2. PROJECTCPU=xxx – Default is NO. Whether to report possible zIIP / zAAP offload data.
  3. HIPERDISPATCH=xxx – Default is YES.
  4. MT_ZIIP_MODE=n – Default is 1. This is the multithreading flag, changing it to 2 enables zIIP multithreading and, sort of, doubles your number of zIIPs.
  5. CCCAWMT=nnnn – Default is 3200 (3.2 ms). This is the time z/OS waits before waking up idle CPs or zIIPs.
  6. ZIIPAWMT=nnnn – Default is 3200 (3.2 ms). This is the time to wait before a busy zIIP asks for help (IIPHONORPRIORITY=YES).

All of these parameters, with the possible exception of MT_ZIIP_MODE, should normally be left at their default values unless you really know what you are doing and what is running on your system! The multithreading parameter is naturally only supported on hardware where the zIIPs can run multithreaded.
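
Just to show where these live, here is a minimal sketch of the corresponding entries in an IEAOPTxx member – everything at its default except PROJECTCPU, which is the one you would typically switch on to get the offload projection data mentioned above. Check with your z/OS systems programmers before touching any of these!

IIPHONORPRIORITY=YES
PROJECTCPU=YES
HIPERDISPATCH=YES
MT_ZIIP_MODE=1
CCCAWMT=3200
ZIIPAWMT=3200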

Anything Else?

Security encryption can run on the zIIP and, as everything is encrypted these days, that can be a very good idea. Also zIIP eligible: XML usage in COBOL, PL/I, Java, CICS, IMS etc., and machine learning with ONNX, while Python AI and ML workloads are 70% eligible. System Recovery Boost arrived with the z15 and above, and SVC dump offload with the z16 and above. The Java workload within z/OSMF can go up to 95% offload, according to IBM internal tests. With the zIIP-to-CP ratio limit now gone, the sky is basically the limit!

Saving the Day?

What are zIIPs saving your firm? I would love to get a screen shot of the output of that SQL!

TTFN

Roy Boxwell

2025-06 IDUG NA 2025

This month I wish to run through IDUG NA 2025 – not every presentation, but the ones I attended or held. It was a very good IDUG indeed, I thought! My colleague Ulf and I got off to a bad start when we missed our flight connection in Amsterdam due to bad weather, but KLM got us on a flight to Boston, so we arrived in Atlanta only six hours later than planned… Ho hum! Such are the banes of modern life!

Db2 for z or Db2 for LUW?

You might well have heard that IDUG have moved away from the pure z/OS or LUW style “tracks” system and now use a more topic-named approach, which *still* catches people out, as they assume that everything in the second column is just for LUW – not true, dear readers! Anyway, it *always* pays to read through the whole grid before planning your 10,000-steps-per-day timetable!

Keynote

The opening keynote, “Leveraging your Db2 Data for Enterprise AI”, from two IBM VPs, Minaz Merali and Priya Srinivasan, was a very good one and well attended. We just managed to finish getting our booth ready in time, as it was “news to us” that the Expo was also where the keynotes were going to be held all week!

It starts!

The technical sessions then kicked off with a “Spotlight” session from Haakon Roberts doing his excellent “Trends and Directions”, as a double-header with Akiko Hoshikawa. It was listed as session A1 in the grid but then IDUG said it was S1 – which caused some confusion when filling in the reviews! Anyway, I really enjoyed it, especially the interaction with the audience, as they all suddenly realized that in a few short years several, or maybe lots, of their objects will be unsupported… For you, dear readers, just download and run my good old Migration HealthCheck for Db2 z/OS to see how many of the evil beasts that IBM are now officially killing off you still have lounging around in production! We all have time, lots of time, to “fix” these problems – don’t panic! Even IBM must do some work to finally get rid of simple, multi-table and non-UTS tablespaces in the Directory and Catalog!!! But start planning and checking now… forewarned is forearmed!

Performance Review [access @ IDUG]*

Then came A2 from Akiko Hoshikawa with the “Key Performance Updates” session – again an excellent session, with a great tip around DSMAX: you should be extremely careful about having a very large number of open datasets, especially indexes, as the root page is always pinned in the buffer pool! So, if you have 100,000 open indexes you can imagine what your buffer pool(s) will look like! A secondary problem is the time it takes to close all these datasets at Db2 shutdown… Db2 does not actually care and passes the buck to z/OS to do it all!

RUNSTATS & Monitoring [access @ IDUG]*

Then I held my first session: C3 “RUNSTATS Master – reloaded”. If you want to learn waaaay more than you should about RUNSTATS, feel free to also download and run our Statistics HealthCheck for Db2 z/OS. Then I popped over to see the Tom Glaser session E5 “Don’t have an SQL monitor? You might need a bigger shovel”, where he drilled down into the nuts and bolts of what you must/should monitor and showed which metrics are useful for tuning your systems.

Keynote – Treasure

Tuesday began with another good keynote session from Greg Lotko, a Senior VP at Broadcom. All about pathfinders and finding treasure – extremely entertaining, I thought!

Utilities Review [access @ IDUG]*

Later the sessions started and I joined Ka Chun Ng for his “Db2 for z/OS Utilities” session as moderator… Here, some technical problems raised their ugly heads, and we had bad audio/visual issues which delayed the start by about ten minutes. This meant Ka Chun could not finish his presentation, which was a real shame, as it is crammed full of great stuff for us utility nerds out there! He even updated me about an error I had in my RUNSTATS presentation – fantastic! Top things here were the zIIP offload for COPY – not much CPU is actually offloaded, but how *many* image copies do you run every day?? REGION=0M is the best for utilities, but we can never use that, can we? He pointed out that utilities are capped at 1.6GB – so now you can allocate a correct REGION size without breaking your firm’s internal standards. Slide 19 was a useful reference for the large REORGs that we must all do at some point to finally get to PBR RPN tablespaces. He also mentioned one of my favorite bug-bears as well -> REORG SYSLGRNX regularly, with MODIFY RECOVERY – this shrinks its size dramatically and really improves overall system performance in a major knock-on effect! Loads of people either do not know this or just do not bother!

Hack Attack? [access @ IDUG]*

A quick stroll back through the rabbit warren of rooms and corridors then brought me to F7: “How to Hack Db2 for z/OS” by Emil Kotrc – have no fear, friends, there is nothing here that will let hackers into your system like a zero-day style hack, but it is a full list of possible vectors that should be a) known about and b) discussed. Biggest takeaways -> check your access permissions to APF-authorized load libraries and sanitize your dynamic SQL input!

Top Ten Lists [access @ IDUG]*

After lunch I moderated Craig Mullins’ D8: “My All-Time Db2 Top Ten lists” which won the best user presentation award! Full of great info and great fun to see/hear. You always learn stuff at Craig’s presentations!

Security! [access @ IDUG]*

Next up was F9 from Gaya Chandran: “Modernizing your Security posture around Db2 z/OS data” which rang bells with me in my Auditor role. Slide nine was the biggest winner for me… And then she reviewed all the new/old/nice security things we have on Db2 for z/OS that must simply be reviewed and/or used… It could make your world much better!

Keynote – Go take a Hike!

Wednesday started with another great keynote from Jennifer Pharr Davis. I had a chat with her at our booth before she started and she was really interested in Db2 and the whole ecosystem. She is what I would call an extreme hiker! She has walked the Appalachian Trail (spanning 14 states and nearly 2,200 miles/3,500 km) three times. Madness, I would say, but from these experiences she learned a lot about resilience and adaptability!

Profile Tables! [access @ IDUG]*

Then it was into session D10 with Scott Walker and Gaya Chandran: “Db2 z/OS 13 – using Profiles to monitor/block unsecure TCP/IP connectivity”, where we learned how Scott set up, ran, monitored, and updated all his Db2 systems to go from insecure to secure TCP/IP port usage in a well-planned and documented style. This highlighted the usefulness of PROFILE tables and proves again that this is one of the best features ever in Db2 for z/OS, in my opinion!

DORA & PCI DSS [access @ IDUG]*

Then I was up again with D11: “Isn’t she aDORAble?”, all about audit, compliance, resilience and how much we need to change into “internal auditors” before a “lead overseer” comes along and makes your life horrible! Feel free to use this presentation at your site to hammer home how much it will cost if you do not start doing stuff now! Due diligence… Try out our freeware SecurityAudit HealthCheck for Db2 z/OS.

Time to go!

That was it for me – I had to leave in the middle of lunch to get my flight back home! One last word must be said about the food – fantastic! The lunch and coffee-break beverages and food were simply great! My personal favorite was when I was at the dessert table, saw “Mexican Chocolate Cheesecake” and wondered out loud to a random guy next to me, “I didn’t know that Mexicans made chocolate cheesecake”. He replied, “The funny thing is, nor did I – and I am Mexican!” Made my day!

and finally…

My colleague Ulf also got to hold a session on Thursday: B15 “Understand, Manage and Love Certificates in z/OS and USS” [access @ IDUG]*, which was all about the “brave new” world of certificates and key-rings and how you cannot afford to ignore them anymore! It all went down splendidly!

TTFN,

Roy Boxwell

If you attended, or it is two/three years later 🙂 , you can access all of the presentations here [access @ IDUG]*.

At the time of writing the A1/S1 presentation from Haakon is sadly not available…

Note: * To get the links to work you must be registered at IDUG *and* entitled to the content. Otherwise you will get an “Oops!” screen popping up telling you that you lack authorization! If that is the case, you can then think about buying the “Premium” version of IDUG membership, which does allow downloading of conference files without having to wait around two years.

2025-05 Central Canada Db2 Users Group (CCDUG) 2025 review

This month I wish to give a quick review and roundup of the Central Canada Db2 Users Group (CCDUG) meeting 2025. It was held in a brand-new location this year, the IBM Toronto development labs – we were all wondering if it could be anywhere near as good as the “old” BMO Institute for Learning. Spoiler alert: it was!

Who are those two reprobates sat there??? 🙂

Tracks Tracks Tracks plus a New Day!

This year, the CCDUG had a free bonus “first” day dedicated to IMS: “Intro to IMS Day”. This was a deep dive into everything HDAM’my, PSB’y and DBD’y in the world of the one true hierarchical database!

Back to Normal?

The keynote, from Greg Lotko, started and I must admit I was impressed! I go to so many conferences, IDUGs and seminars etc. that keynotes tend to let me down… Not this time: it was interesting, engaging and, quite frankly, the best keynote I have seen!

Now Back to Normal!

The two-day conference was split into four separate tracks:

  • Db2 for z/OS (I spent most of my time here, of course!)
  • Db2 LUW
  • IMS
  • AppDev, which was cross-platform and cross-database!

Here’s a link to the presentation grid download page overview where you can check out abstracts etc.

The actual presentations are also available here, so you can grab copies for your good selves.

The Conference

There were a few last-minute grid changes due to cancelled flights etc., so what you see on the grid is not actually 100% what was presented, but it was really just swapping presentations and/or presenters! I apologize now if I have missed out anyone who thinks they should be included, but even I make mistakes these days!!!

Personal Faves

ZOS-01 Db2 13 for z/OS Latest Features, with Mark and Tori rattling through all the new and cool stuff in Db2 13. I really like Mark Rader, Tori Felt and Anna McKee – they have a great synergy together and know a ton of stuff. So, anything presented by them is simply a must-have!

ZOS-02 Universal Tablespace update as of V13, with David Simpson, was very good indeed, including tips and tricks about finally getting to an all-UTS world, which is important for ZOS-06!

ZOS-04 Audit your Db2 for z/OS – Isn‘t she aDORAble!!, by yours truly, Roy Boxwell: a breathtaking run through the world of legal double-speak and corporate hell. Voted by one attendee as the scariest thing he has seen in his life and by another as simply “the horror, the horror”…

ZOS-05 Db2 for z/OS Health Check Topics with Mark and Tori, this time going through the results of the IBM 360-degree health checks that they do at customer sites and reporting on common problems that they repeatedly find.

ZOS-06 The Db2 for z/OS Catalog and Directory: The Past, Present, and Future. John Lyle talking about the Db2 Catalog and Directory, including the fact that migration to Db2 Vnext will *not* be possible if you have any non-UTS tablespaces (ignoring LOB and XML of course!). I asked about DEFINE NO and he confirmed that a DEFINE NO non-UTS tablespace will stop the migration. I then opened an Aha Idea, as I think this could cause grief at a lot of sites… Please go to Aha Ideas and vote!

DB24ZOS-I-1752   Support REORG for DEFINE NO

I have heard, in various IBM presentations (Haakon at IDUG Valencia, John Lyle at CCDUG) talking about Db2 Vnext, the statement “no migration to Vnext if you have 6-byte RBA/LRSN or non-UTS tablespaces”. This is fine unless you happen to have old DEFINE NO objects. You cannot REORG these to action the ALTER command, as REORG is disallowed on DEFINE NO. We cannot simply INSERT a row into every DEFINE NO object, as this creates possibly hundreds of objects which, by definition, are not really in use (yes, they can be SELECTed from!!). Reverse engineering the DDL, DROP and reCREATE, reGRANT and BIND is just a crazy amount of effort.

My idea is simply to allow REORG on DEFINE NO objects, which would just set the correct bits in the catalog so that, when the object eventually gets created, it is created as a UTS with MAXPARTITIONS 1 DSSIZE 64GB, which is 100% OK.

ZOS-07 Db2 for z/OS and Db2 Administration Foundation: An Installation Experience, as well as having the longest title of the conference(!), was a real-world report of how to get this tool installed, up and working with Zowe. We all know we must Zowe, right? Remember that Data Studio has officially been declared dead and is now buried as well!

AppDev-03 Db2 for z/OS 101: This or That? Tools to interact with Db2 for z/OS! This was Anna and Mark doing a double-header with lists and lists of different tooling for different business cases and whether they are free – pardon me, “non-charge items” – or not!

AppDev-06 Exploit Certificates and eliminate tiresome password pains in z/OS and USS, from Ulf Heinrich, shows you how to go painlessly into the brave new world of certificates on the mainframe. Required reading, especially when going to Zowe!

Fill in Your Form, Please!

As at every conference, you had to submit your conference review form at the end, where I happily plugged for a COBOL special day next year too and, perhaps, a bit less garlic in the chicken pasta!

That’s all, Folks!

We then had to leave early to catch our flight back – which then got a delayed take-off and, due to huge queues at passport control *in Germany*, we missed our connecting flight to Düsseldorf… The nice ladies at our check-in desk gave us both a €7 voucher and we got some train tickets for the high-speed ICE Sprinter instead. Flying along at 300 km/h while at ground level is quite impressive!

Hope to see some of you at the IDUG NA 2025 in Atlanta, from Sunday 8th June until Thursday the 12th! If you cannot make it to Atlanta don’t worry – feel free to pop on over to Düsseldorf for the IDUG EMEA 2025 from Sunday 26th October until Thursday the 30th! Maybe you can also take the high-speed train too?

TTFN

Roy Boxwell

2025-04 COMPRESS you like?

This month, I wish to do a brief overview of the pros and cons of using COMPRESS YES at the tablespace and index level.

Starting with Tablespaces

Compression came in with DB2 V3 and required a special processor feature for the CMPSC instruction to work at any really useful speed. Nothing changed for a loooooong time until DB2 V9, when XML compression was added. Then, in DB2 10, “partial compression” came along, which includes building the dictionary after 1.2MB of data has been inserted by the LOAD utility. Afterwards, in Db2 12 FL504, we got a brand-new compression routine, only for UTS, called “Huffman”, which also caused the “old” routine to be renamed “fixed length”. Then, in FL509, Huffman got fully externalized into the Db2 Catalog and full utility support was introduced. All of these methods require the creation, and maintenance, of a dictionary, where Db2 can look up the byte codes/words that actually do the compression.

A new ZPARM as well

Naturally, to specify the default compression routine in use, a new ZPARM appeared: TS_COMPRESSION_TYPE. If it is set to HUFFMAN, then you can simply issue an ALTER like ALTER TABLESPACE ROY.BOY COMPRESS YES, but if it is not set to HUFFMAN, then the ALTER must be specific – ALTER TABLESPACE ROY.BOY COMPRESS YES HUFFMAN – to get the new method.
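
Spelled out as statements (a minimal sketch using the same hypothetical ROY.BOY tablespace as above), the two variants look like this – and remember that the new dictionary, and therefore the actual space savings, only arrive once a REORG or LOAD REPLACE rebuilds it:

-- TS_COMPRESSION_TYPE = HUFFMAN: the plain ALTER picks up the ZPARM default
ALTER TABLESPACE ROY.BOY COMPRESS YES ;

-- TS_COMPRESSION_TYPE not set to HUFFMAN: name the method explicitly
ALTER TABLESPACE ROY.BOY COMPRESS YES HUFFMAN ;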

Indexes

Index compression came in with DB2 V9 and is not really the same as tablespace compression at all! Firstly, there is no dictionary, as it is all handled in the Db2 I/O engine. It is really a “prefix” compression and it forces a larger index bufferpool size to get it done – this has benefits and costs, of course. The major difference is that the indexes are always compressed on disk down to a 4K page size and expanded in the bufferpool up to the 8K, 16K or 32K size you have given them. The other thing to remember is that index compression is only for leaf pages – nothing else gets compressed.
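
For indexes there is no HUFFMAN/FIXEDLENGTH choice – just COMPRESS YES – but you do need the index in an 8K, 16K or 32K bufferpool first. Here is a minimal sketch with a purely hypothetical index name; expect the index to need a REBUILD INDEX or REORG afterwards before the change is fully in effect:

-- ROY.BOY_IX is a hypothetical index name: adjust to suit
-- 1) move the index to a larger bufferpool (compressed indexes need 8K, 16K or 32K)
ALTER INDEX ROY.BOY_IX BUFFERPOOL BP8K0 ;

-- 2) switch compression on; plan for a REBUILD INDEX (or REORG) afterwards
ALTER INDEX ROY.BOY_IX COMPRESS YES ;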

XML

XML spaces supported “fixed length” compression from DB2 V9, but they also needed the special processor feature to actually work.

LOB

Yes, nowadays you can also compress LOBs, in Db2 12 and above, but it needs zEDC to do so.

zEDC requires the following:

  • z/OS® V2R1 (or later) operating system.
  • One of the following:
    • IBM® z15®, or later, with the Integrated Accelerator for zEDC
    • IBM zEnterprise® EC12 CPC (with GA2 level microcode) or zBC12 CPC, or later, with the zEDC Express feature.
  • zEDC software feature enabled in an IFAPRDxx parmlib member. For more information, see Product Enablement for zEnterprise Data Compression.
  • Adequate 64-bit real storage that is configured to this z/OS image.

Huffman?

The first compression algorithm was Lempel-Ziv, with a dictionary of typically 4,096 entries, each having a 12-bit “key” for the value to be replaced. This worked great, but over the years newer methods have appeared. One of them is Huffman entropy compression. It is similar to Lempel-Ziv, but it sorts the data in the dictionary by frequency before assigning a variable number of bits to each value. (This is why the old method is called “fixed length”, of course!) So, if you have millions of occurrences of THE, it could compress down to one single bit – a huge win over the 12 bits for Lempel-Ziv. But remember the great sayings: “It depends” and “Your mileage may vary”, as not all data is good for compression and certainly not all data is good for Huffman compression.

Suck it and see!

IBM supplies a very nice utility, from way back in the day, called DSN1COMP, which has become very good over the years and is much more user friendly than when it first came out! Basically, you give it the VSAM cluster name of the object you wish to test and a handful of parameters to give it a clue about what you have defined (FREEPAGE, PCTFREE, PAGESIZE and DSSIZE), and then Bob’s your uncle – you get a nice report:

DSN1COMP Idiot’s Guide

The basic JCL looks like this:

//DSN1COMP EXEC PGM=DSN1COMP,                                        
// PARM='PCTFREE(0),FREEPAGE(0),ROWLIMIT(9999),REORG'                
//STEPLIB  DD DISP=SHR,DSN=xxxxxx.SDSNEXIT.xxxx                       
//         DD DISP=SHR,DSN=xxxxxx.SDSNLOAD                           
//SYSPRINT DD SYSOUT=*                                               
//SYSUT1   DD DISP=SHR,DSN=xxxxxx.DSNDBD.dbname.tbspace.I0001.A001

The parameters are quite long and complex…

PAGESIZE(nnK) – Must be the page size of the object you are running against. Get it wrong and you “might produce unpredictable results”.

DSSIZE(nnnG) – Must be the DSSIZE of the object you are running against. Get it wrong and you “might produce unpredictable results”.

If you omit these two options, DSN1COMP will attempt to extract the data from the header page. Depending on how old the VSAM/Full image copy dataset is, this data might be found – or not!

NUMPARTS(nnnn) – DSN1COMP assumes the object is not partitioned. Get it wrong and you “might produce unpredictable results”. For UTS data, this parameter is not used as DSSIZE takes care of the requirement.

FREEPAGE(nnn) – This must be set to the current FREEPAGE of the object. From 0 – 255. Default is 0.

PCTFREE(nn) – This must be set to the current PCTFREE of the object. From 0 to 99. Default is 5.

MAXROWS(nnn) – This must be set to the current MAXROWS of object. From 1 to 255. Default is 255.

FULLCOPY – Informs DSN1COMP that the dataset is a full image copy dataset. If it is a partitioned dataset then you must also use the NUMPARTS parameter.

REORG – This switches DSN1COMP from “LOAD” mode to “REORG” mode and generally gives a more accurate compression report as DSN1COMP then simulates full record compression and not “after 1.2MB rows” or whatever internal threshold is met. Even using REORG, you might not get a perfect match with the real compressed data as DSN1COMP uses sampling. Not valid for LOB spaces!

LOB – Informs DSN1COMP that it is now working with a LOB space – If used, *no* other parameters are allowed! You must have the zEDC for this option!

COMPTYPE(x-x) – HUFFMAN, FIXED or ALL. If not specified, DSN1COMP will check for hardware support and output HUFFMAN and FIXED, otherwise just FIXED.

ROWLIMIT(n-n) – An absolute must! From 1 to 99,000,000. Forget this and DSN1COMP will trundle through the *entire* dataset! I would set this to 99999 to begin with.

EXTNDICT(xxxxxxxx) – An eight-byte name for a generated externalized object deck compression dictionary to DD card DSN1DICT. This is not normally used.

LEAFLIM(n-n) – Limits the number of index leaf pages that DSN1COMP reads to calculate index compression rates. From 1 to 99,000,000. Default is all index leaf pages. Remember that index compression only compresses leaf pages. Note that this is the only parameter allowed for index datasets.

How it looks

DSN1999I START OF DSN1COMP FOR JOB BOXWELL$ STEP1                         
DSN1998I INPUT DSNAME = xxxxxx.DSNDBD.dbname.tbspace.I0001.A001  , VSAM
DSN1944I DSN1COMP INPUT PARAMETERS                                        
         INPUT DATA SET CONTAINS NON-COMPRESSED DATA                      
           4,096  DICTIONARY SIZE USED                                    
               0  FREEPAGE VALUE USED                                     
               0  PCTFREE VALUE USED                                      
                  COMPTYPE(ALL) REQUESTED                                 
           9,999  ROWLIMIT REQUESTED                                      
                  ESTIMATE BASED ON DB2 REORG METHOD                       
             255  MAXROWS VALUE USED                                      
DSN1940I DSN1COMP COMPRESSION REPORT                                                            
  HARDWARE SUPPORT FOR HUFFMAN COMPRESSION IS AVAILABLE                                         
  +------------------------------+--------------+------------+------------------+
  !                              !              ! Estimated  ! Estimated state  !
  !                              ! UNCOMPRESSED ! Compressed ! Compressed       !
  !                              !              ! FIXED      ! HUFFMAN          !
  +------------------------------+--------------+------------+------------------+
  ! DATA (IN KB)                 !        2,269 !        582 !              506 !
  ! PERCENT SAVINGS              !              !         74%!               77%!
  !                              !              !            !                  !
  ! AVERAGE BYTES PER ROW        !          235 !         62 !               54 !
  ! PERCENT SAVINGS              !              !         73%!               77%!
  !                              !              !            !                  !
  ! DATA PAGES NEEDED            !          589 !        154 !              134 !
  ! PERCENT DATA PAGES SAVED     !              !         73%!               77%!
  !                              !              !            !                  !
  ! DICTIONARY PAGES REQUIRED    !            0 !         64 !               64 !
  ! ROWS ... TO BUILD DICTIONARY !              !      1,149 !            1,149 !
  ! ROWS ... TO PROVIDE ESTIMATE !              !      9,999 !            9,999 !
  ! DICTIONARY ENTRIES           !              !      4,096 !            4,080 !
  !                              !              !            !                  !
  ! TOT PAGES (DICTNARY + DATA)  !          589 !        218 !              198 !
  ! PERCENT SAVINGS              !              !         62%!               66%!
  +------------------------------+--------------+------------+------------------+
                                                                                                
DSN1994I DSN1COMP COMPLETED SUCCESSFULLY,            605  PAGES PROCESSED                        

I have edited the report to make it a bit thinner!

As you can easily see, the original dataset is 589 pages. After “normal” compression it is down to 218 pages, which is a 62% reduction. However, with Huffman it squeezes down even further to 198 pages, or 66%. So, according to my golden rule, it fits well! Once done and actioned by a REORG, remember to performance test, as compression can bite you: the SYSIBM.SYSTABLES column PCTROWCOMP is used by the Db2 Optimizer for access path selection! In other words: test, test and test!
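
If you want to see the real-world result after the REORG, run a RUNSTATS to refresh the relevant catalog columns and then something like this sketch (with dbname/tbspace as placeholders for your own object) shows what was actually achieved:

SELECT SUBSTR(TB.NAME, 1, 18)  AS TABLE_NAME
      ,TB.PCTROWCOMP           AS PCT_ROWS_COMPRESSED
      ,TP.PARTITION
      ,TP.PAGESAVE             AS PCT_PAGES_SAVED
FROM SYSIBM.SYSTABLES    TB
    ,SYSIBM.SYSTABLEPART TP
WHERE TB.DBNAME = 'dbname'     -- your database
  AND TB.TSNAME = 'tbspace'    -- your tablespace
  AND TB.DBNAME = TP.DBNAME
  AND TB.TSNAME = TP.TSNAME
ORDER BY 1 , 3
FOR FETCH ONLY
WITH UR
;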

How do Indexes look?

JCL is similar, but different of course, due to the lack of parameters!

//STEP1  EXEC PGM=DSN1COMP,                                          
// PARM='LEAFLIM(9999)'                                              
//STEPLIB  DD DISP=SHR,DSN=xxxxxx.SDSNEXIT.xxxx                       
//         DD DISP=SHR,DSN=xxxxxx.SDSNLOAD                           
//SYSPRINT DD SYSOUT=*                                               
//SYSUT1   DD DISP=SHR,DSN=xxxxxx.DSNDBD.dbname.ixspace.I0001.A001

The output:

DSN1999I START OF DSN1COMP FOR JOB BOXWELL$ STEP1                         
DSN1998I INPUT DSNAME = xxxxxx.DSNDBD.dbname.ixspace.I0001.A001  , VSAM
                                                                          
DSN1944I DSN1COMP INPUT PARAMETERS                                        
                  PROCESSING PARMS FOR INDEX DATASET:                     
           9,999  LEAF LEAFLIM REQUESTED                                  
                                                                          
DSN1940I DSN1COMP COMPRESSION REPORT                                      
                                                                          
           9,999  REQUESTED LEAF LIMIT REACHED                            
           9,999  Index Leaf Pages Processed                              
         621,333  Keys Processed                                          
         621,333  Rids Processed                                          
          32,096  KB of Key Data Processed                                
          11,947  KB of Compressed Keys Produced       

    EVALUATION OF COMPRESSION WITH DIFFERENT INDEX PAGE SIZES
                                                             
    ----------------------------------------------           
 8  K Page Buffer Size yields a                              
51  % Reduction in Index Leaf Page Space                     
    The Resulting Index would have approximately             
49  % of the original index's Leaf Page Space                
    No Bufferpool Space would be unused                      
    ----------------------------------------------           
                                                             
    ----------------------------------------------           
16  K Page Buffer Size yields a                              
63  % Reduction in Index Leaf Page Space                     
    The Resulting Index would have approximately             
37  % of the original index's Leaf Page Space                
32  % of Bufferpool Space would be unused to                 
    ensure keys fit into compressed buffers                  
    ----------------------------------------------           
                                                             
    ----------------------------------------------           
32  K Page Buffer Size yields a                              
63  % Reduction in Index Leaf Page Space                     
    The Resulting Index would have approximately             
37  % of the original index's Leaf Page Space                
66  % of Bufferpool Space would be unused to                 
    ensure keys fit into compressed buffers                  
    ----------------------------------------------           
DSN1994I DSN1COMP COMPLETED SUCCESSFULLY,          9,999  PAGES PROCESSED                   

Here you see the huge difference in output!

DSN1COMP computes the results for the three possible new page sizes for your index (8K, 16K and 32K), which enables you to decide which one is “best” for the index to live in. In this example, the 8K bufferpool is the clear winner, as no bufferpool space is wasted and it provides over 50% savings.

Problems?

The Db2 optimizer does *not* “know” that the index is compressed, but it *does* know which bufferpool you have moved it to. Remember to correctly size all of these new bufferpools so that you do not get paging or excessive flushing!

All clear?

Now it is clear: you must still weigh up the pros and cons here! Do not blindly compress the world and expect to save TBs of disk space! The CPU required to compress/decompress is not free, and the Db2 I/O engines must also do extra work for index compression.

Rules Of Thumb

Ignore very small rows – typically, it makes no sense to compress rows under 16 bytes at all! Remember, you can still only get a maximum of 255 rows in a page (even if compressed!), so very small rows would hit that limit and make the exercise pointless.
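
As a quick check for this rule, the RUNSTATS-maintained AVGROWLEN column in SYSIBM.SYSTABLES can list the tables where compression is pointless (just a sketch – the 16-byte cut-off is simply the rule of thumb above):

SELECT SUBSTR(CREATOR, 1, 8)   AS CREATOR
      ,SUBSTR(NAME , 1 , 18)   AS NAME
      ,AVGROWLEN
FROM SYSIBM.SYSTABLES
WHERE TYPE       = 'T'
  AND AVGROWLEN  > 0           -- skip tables with no statistics yet
  AND AVGROWLEN  < 16
ORDER BY 3 , 1 , 2
FOR FETCH ONLY
WITH UR
;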

Very large rows – take a row that is 4,000 bytes and let’s say it compresses by 45%, leaving you with 2,200 bytes – it still only gets one row in a 4K page. (Previously the 4,000 bytes fit due to the page size limit of 4,054, and now with 2,200 still only one row is allowed!) In this case, I would change the bufferpool to a larger page size to actually realize any benefits.

There are cases where compressing (or using Huffman to compress more) can negatively affect an application that is using page-level locking.

Obviously, do not compress tablespaces defined with MAXROWS=1.

If the space saving is less than 10% – 20%, it generally makes no sense to use compression.

Db2 only actually does the compression if the row will then be shorter – It could well be, with lots of timestamp fields, that it ends up being the same size and so Db2 saves you CPU by not compressing it.

Compression is *not* encryption and should not be used as such! Furthermore, compressing encrypted data is probably pointless.

LOAD inserts data until a sweet spot is reached, and then it starts to insert compressed data (True for INSERT, MERGE and LOAD SHRLEVEL CHANGE of course).

REORG builds the dictionary in the UNLOAD phase so all rows, if eligible, will be compressed.

To share or not to share?

Sharing dictionaries can be a great idea or a terrible idea – your data decides! In the LOAD statement, if you are loading at PARTITION level, you may add “COPYDICTIONARY nnnn” to save the time of completely building a dictionary. This only makes sense if you *know* that the dictionary is good for your partition as well.

Keeping Dictionaries

You may decide to save all the time spent building dictionaries by adding KEEPDICTIONARY to the LOAD or REORG utility cards. The only problem here is that sometimes the data does change and a new dictionary will be required.

And what about Indexes?

Well, I tend to like the larger bufferpool here just to help reduce index page splits. So going to 8K and above can have a very nice knock-on effect into general performance and logging.

Looking for Candidates?

I would start with all big table partitions (over 199MB) that are not compressed at all and also perhaps take another look at any table partitions that compressed badly. Perhaps Huffman or rebuilding the dictionary could help? For indexes I would just list out the uncompressed ones larger than 99MB. Feel free to change the values!

Here’s some simple SQL to list out the interesting cases. For table partitions and tablespaces:

SELECT RTS.DBNAME                        
     , RTS.NAME                          
     , RTS.PARTITION                     
     , RTS.SPACE / 1024 AS MB            
     , TS.INSTANCE                       
     , TP.IPREFIX                        
     , TP.COMPRESS                       
     , TP.PAGESAVE                       
FROM SYSIBM.SYSTABLESPACESTATS RTS       
    ,SYSIBM.SYSDATABASE        DB        
    ,SYSIBM.SYSTABLESPACE      TS        
    ,SYSIBM.SYSTABLEPART       TP        
WHERE     RTS.DBNAME       = TS.DBNAME   
  AND NOT RTS.DBNAME    LIKE 'DSNDB0_'   
  AND     RTS.NAME         = TS.NAME     
  AND     RTS.DBNAME       = DB.NAME     
  AND     RTS.INSTANCE     = TS.INSTANCE 
  AND NOT DB.TYPE          = 'W'         
  AND     RTS.DBNAME       = TP.DBNAME   
  AND     RTS.NAME         = TP.TSNAME   
  AND     RTS.PARTITION    = TP.PARTITION
  AND     RTS.SPACE / 1024 > 199         
  AND    (TP.COMPRESS      = ' '         
      OR (TP.COMPRESS     IN ('Y' , 'F') 
      AND TP.PAGESAVE      < 20))        
ORDER BY 4 DESC, 1 , 2 , 3               
FOR FETCH ONLY                           
WITH UR                                  
;                                        

For indexes:

SELECT RTS.DBNAME                        
     , RTS.INDEXSPACE                    
     , RTS.PARTITION                     
     , RTS.SPACE / 1024 AS MB            
     , RTS.NLEAF                         
     , RTS.INSTANCE                      
     , IP.IPREFIX                        
FROM SYSIBM.SYSINDEXSPACESTATS RTS       
    ,SYSIBM.SYSINDEXES         IX        
    ,SYSIBM.SYSINDEXPART       IP        
WHERE     RTS.NAME         = IX.NAME     
  AND NOT RTS.DBNAME    LIKE 'DSNDB0_'   
  AND     RTS.CREATOR      = IX.CREATOR  
  AND     RTS.PARTITION    = IP.PARTITION
  AND      IX.CREATOR      = IP.IXCREATOR
  AND      IX.NAME         = IP.IXNAME   
  AND     RTS.SPACE / 1024 > 99          
  AND      IX.COMPRESS     = 'N'         
ORDER BY 4 DESC, 1 , 2 , 3               
FOR FETCH ONLY                           
WITH UR                                  
;                                        

What are your thoughts or experiences with compression? I would love to hear from you!

TTFN,

Roy Boxwell

2025-03 You deprecated, are?

This month I wish to bring along a few things of future interest. It all started in Valencia at the IDUG EMEA in 2024, shortly before the catastrophic floods. Haakon Roberts held his “trends and directions” presentation at the very beginning and this text was in his slide deck on page 4:

Now, we all know that IBM *never* announces “Db2 V14 is coming soon!” But they do talk about Vnext, and it is pretty obvious that this list will stop you migrating from Db2 13 to Db2 14 if any of these items exist or, even worse, are in use at your shop.

What Can You Do?

Well, way back in 2020, I wrote a Migration HealthCheck for Db2 z/OS program as our yearly “give-away” and guess what? It does all the stuff in that list *apart* from VTAM. Not only that, but in the intervening years Audit has grown and grown in importance. Then, in January 2025, DORA came into force, for which I wrote another yearly “give-away” called SecurityAudit HealthCheck for Db2 z/OS. This checks everything an auditor would like to have checked, and *also* checks for VTAM/SNA usage in the Communication Database (CDB).

VTAM/SNA??

Cast your mind back about 25 years or so, and you might well remember that TCP/IP was some weird new-fangled way of sending “packets of information” – Not something responsible mainframers did at all! All we had was VTAM/SNA and 3270 with green screens – It was (is!) enough for us!

The Cheque is in the Post …

These days, VTAM/SNA has long overstayed its welcome, as it was an “I trust you” style of communication. The rationale was, “If you have logged onto one Db2 system, I must trust you on any other Db2 system – who would lie to me?” So, it is not recommended any more and, in fact, with DORA and other audit requirements it is a *bad* idea to even allow it to remain “on its perch” – it must be carried out and buried!

When the B was big!

Back in DB2 V9 (remember when the B was BIG?), IBM brought in the IPNAME “ZPARM” to enable the complete disabling of VTAM/SNA communication in the DDF, as it was known, even way back then, to be an inherent security risk.

Why “ZPARM”?

Well, here is another twist to the story: IBM introduced this like a ZPARM but it is actually a parameter of the DDF and is stored in the Bootstrap Dataset (BSDS). So, run a DSNJU004 and you might see something like this:

LOCATION=xxxxxxxx IPNAME=(NULL) PORT=nnnn SPORT=nnnnn RPORT=nnnn
ALIAS=(NULL)                                                   
IPV4=xxx.xxx.xxx.xxx IPV6=NULL                                    
GRPIPV4=xxx.xxx.xxx.xxx GRPIPV6=NULL                              
LUNAME=xxxxxxxx PASSWORD=(NULL) GENERICLU=(NULL)


Here you can plainly see the IPNAME=(NULL) telling you it is not set and thus allows VTAM/SNA.

When DDF starts, it reports all of its parameters, slightly differently of course just to be awkward, in the xxxxMSTR output:

12.15.25 STC04347  DSNL003I  -xxxx DDF IS STARTING       
12.15.26 STC04347  DSNL004I  -xxxx DDF START COMPLETE 
   605                        LOCATION  xxxxxxxx         
   605                        LU        xxxxxxxx.xxxxxxxx
   605                        GENERICLU -NONE            
   605                        DOMAIN    xxx.xxx.xxx.xxx  
   605                        TCPPORT   nnnn             
   605                        SECPORT   nnnnn            
   605                        RESPORT   nnnn             
   605                        IPNAME    -NONE            
   605                        OPTIONS:                   
   605                         PKGREL = COMMIT

Here VTAM/SNA usage is not disallowed as IPNAME is -NONE. You can also issue the -DIS DDF command on your stand-alone Db2 subsystem or on all members of your Db2 data-sharing group and verify that the output looks like:

DSNL080I  xxxxx DSNLTDDF DISPLAY DDF REPORT FOLLOWS:          
DSNL081I STATUS=STARTD                                        
DSNL082I LOCATION           LUNAME            GENERICLU       
DSNL083I xxxxxxxx           xxxxxxxx.xxxxxxxx -NONE           
DSNL084I TCPPORT=nnnn  SECPORT=nnnnn RESPORT=nnnn  IPNAME=-NONE
DSNL085I IPADDR=::xxx.xxx.xxx.xxx

The value IPNAME=-NONE in the DSNL084I message is what you must look for and hopefully *not* find!

Now, in another subsystem, I have IPNAME set so the BSDS print looks like:

LOCATION=xxxxxxxx IPNAME=yyyyyyy PORT=nnnn SPORT=nnnnn RPORT=nnnn
ALIAS=(NULL)                                                    
IPV4=xxx.xxx.xxx.xxx IPV6=NULL                                     
GRPIPV4=xxx.xxx.xxx.xxx GRPIPV6=NULL                               
LUNAME=xxxxxxxx PASSWORD=(NULL) GENERICLU=(NULL)

and when DDF starts it reports:

12.15.36 STC04358  DSNL003I  -xxxx DDF IS STARTING  
12.15.37 STC04358  DSNL004I  -xxxx DDF START COMPLETE
   713                        LOCATION  xxxxxxxx    
   713                        LU        -NONE       
   713                        GENERICLU -NONE       
   713                        DOMAIN    -NONE       
   713                        TCPPORT   nnnn        
   713                        SECPORT   nnnnn       
   713                        RESPORT   nnnn        
   713                        IPNAME    yyyyyyy     
   713                        OPTIONS:              
   713                         PKGREL = COMMIT      
   713                         WLB = DFLT

The IPNAME is set to something and the LU is not set, even though it *is* set in the BSDS – much better!

Again, you can also issue the -DIS DDF command on your stand-alone Db2 subsystem or on all members of your Db2 data-sharing group and verify that the output looks like:

DSNL080I  xxxxx DSNLTDDF DISPLAY DDF REPORT FOLLOWS:               
DSNL081I STATUS=STARTD                                             
DSNL082I LOCATION           LUNAME            GENERICLU         WLB
DSNL083I xxxxxxxx           -NONE             -NONE             DFLT
DSNL084I TCPPORT=nnnn  SECPORT=nnnnn RESPORT=nnnn  IPNAME=yyyyyyy  
DSNL085I IPADDR=::xxx.xxx.xxx.xxx

Now, in the DSNL084I message, the value IPNAME=yyyyyyy is what you must look for and hopefully find!

Delete Delete Delete

The last piece of the VTAM/SNA puzzle is to clean up your CDB afterwards, just to make sure no old, unused, and now completely unusable, definitions are lying around. They annoy auditors, so I would recommend deleting all of the VTAM/SNA definitions that you might still have. Our DORA give-away last year (SecurityAudit HealthCheck) listed all of these out for you to do exactly that. It is well worth downloading and running as it is free! Putting it simply, just review, and then delete, all the rows in the tables SYSIBM.LULIST, SYSIBM.LUMODES, SYSIBM.LUNAMES and SYSIBM.MODESELECT.
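
If you first want to see what is actually lurking in there, a quick row count per table is a good start. This is just a sketch – review the actual rows (SELECT *) with your systems programmers before any DELETE:

SELECT 'LUNAMES   ' AS CDB_TABLE, COUNT(*) AS ROW_COUNT
FROM SYSIBM.LUNAMES
UNION ALL
SELECT 'LULIST    ', COUNT(*)
FROM SYSIBM.LULIST
UNION ALL
SELECT 'LUMODES   ', COUNT(*)
FROM SYSIBM.LUMODES
UNION ALL
SELECT 'MODESELECT', COUNT(*)
FROM SYSIBM.MODESELECT
WITH UR
;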

Hooray!

With these two freebies you can easily check if your systems are:

  1. Db2 Vnext ready
  2. DORA and PCI DSS V4.0.1 Compliant!

Pretty cool for nothing huh?

TTFN

Roy Boxwell

2025-02 It SIEMs good to me!

Hi! Excuse the horrible pun, but the SIEM world has intruded into my quiet COBOL / SQL world over the last month or two quite badly!

SIEM?

You hear it a lot, but what exactly is it? It is Security Information and Event Management, hence SIEM. It is a field within computer security that combines Security Information Management (SIM) and Security Event Management (SEM), enabling real-time analysis of security alerts generated by applications and network hardware.

DORA

DORA kicked it all off for me with the EU bringing in brand new laws to make FinTech more resilient against cyber threats which in turn means doing more auditing and testing of the complete infrastructure.

WorkLoadExpert

Now SQL WorkloadExpert for Db2 z/OS (WLX) was designed many many moons ago for performance monitoring and tuning SQLs but over the last five to six years has slowly turned into an auditing tool. Naturally, we collect all the SQL on your machine to do the performance monitoring so this is a very good start for doing audit and test. Then we added more and more IFCID data to finally get the complete audit picture with our WLX Audit for Db2 z/OS.

A bit LEEFy on the Street…

This was all well and good but then of course came the time when the mainframe did not stand alone (At least from the auditor’s point of view!) which meant we had to create a method to transfer the data required for audit from the mainframe down to “the boxes you can carry”. The first way we developed was using the LEEF record format.

LEEF?

The Log Event Extended Format (LEEF) is a customized event format for IBM® Security QRadar®. QRadar can integrate, identify, and process LEEF events. LEEF events must use UTF-8 character encoding. Starting from the EBCDIC native data, the records are OPUT down to USS and then iconv’d into UTF-8, which (after a DISPLAY UTF in browse) looks nearly the same as the original. The final step for LEEF processing is a GZIP call, which dramatically shrinks the size of the file but, of course, means it is not human readable any more.

CEF?

SPLUNK then came along…I love these names! So, we added the Common Event Format (CEF) to make it work deep down in the caves… That is mentioned in the company profile by the way: From its founding in 2003, Splunk has helped organizations explore the vast depths of their data like spelunkers in a cave (hence, “Splunk”).

LOGger heads?

Then we had some customers who did not want LEEF/CEF format but wanted a “direct to analysis tool” solution using SYSLOGGER and JSON. So we created the SYSLOGGER API, which connects via TCP/IP directly to your system logger and sends the data straight there – in trace mode you can watch exactly what is being sent.

A Bit of a Stretch?

What we now have is even more customers asking for a mix of both of these systems. So, a JSON payload but in a flat file that can be directly ingested by Elastic… The things I do!!!

The Elastic Common Schema (ECS) data also starts out as EBCDIC native data. Just like LEEF, it is then OPUT down to USS and iconv’d to UNICODE, after which it can be browsed as readable data.

JSON Format?

Naturally, you can then ingest it as a JSON file.

All done?

Probably never! But what SIEM systems are you all using out there? Have you all been “hit by the Audit” bug? I would love to hear your war stories!

TTFN,

Roy Boxwell

2025-01 Compress and Replicate This!

This month, I want to delve into one of those newish, but well-hidden, features of Db2 that arrived some years ago but did not get that much publicity!

Compress Yes!

We all love compression! It saves tons of disk space and, over the years, we even got Index Compression. Yes, I know it is *not* really compression at all – just the leading bytes – but it is better than nothing! We also got better, newer table compression algorithms (looking at you, Mr. Huffman!), and we have always had the problem of un-compressing data when “we” need to look at it off-line.

Dictionaries Anymore?

Db2, at least for Tables, uses dictionaries of varying sizes to save “common strings” mapped to small numbers. These dictionaries are pretty obviously only valid for one set of data from one table and/or partition. If you do a REORG after inserting a ton of new data then the new dictionary can be very different from the prior one. Db2 writes the *previous* dictionary to the log and we carry on fine …

Or Do We?

Well, what happens when you wish to read data from the log that was inserted *before* your current dictionary was created? This is, and will always be, a small technical problem! The programs out there now simply crawl back through the Db2 log looking for the compression dictionary that matches to your timescales. This obviously takes time, I/O and CPU!

Online?

Even if the dictionary you are looking for is the current one, accessing it might well cause DBD Locks to happen when it has to be opened, and it can even cause Db2 GBP dependency problems. The fun continues if the logs you need have been archived to tape, of course.

DCC

Data Capture Changes are the keywords here, DCC for the standard TLA. As you are probably aware, setting this at the table level enables Db2 to support data replication using the log. IBM calls its own version DataPropagator. The DDL syntax in CREATE/ALTER TABLE is DATA CAPTURE NONE/CHANGES:

DATA CAPTURE Specifies whether the logging of the following actions on the table includes additional information to support data replication processing:

• SQL data change operations

• Adding columns (using the ADD COLUMN clause)

• Changing columns (using the ALTER COLUMN clause)

NONE Do not record additional information to the log. This is the default.

CHANGES Write additional data about SQL updates to the log.
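
As a small, hedged example (the table name is hypothetical), switching DCC on and then checking which tables have it looks like this:

ALTER TABLE MYSCHEMA.ACCOUNTS
      DATA CAPTURE CHANGES;

SELECT CREATOR, NAME
FROM SYSIBM.SYSTABLES
WHERE DATACAPTURE = 'Y'
ORDER BY CREATOR, NAME
WITH UR
;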

This Changes Everything!

Indeed, it does! DATA CAPTURE CHANGES (DCC, and also sometimes CDC!) causes a full row to be logged for every update. Without DCC, an update of just the very last byte of data in a row produces a tiny log record of basically just that last byte. With DCC you always get the full row. For INSERT and DELETE you always got the full row anyway. Why the change? So that the replication software has it easy! Simple as that!

Side Effects?

There are a few side effects, of course. Mainly, the log size grows as you are logging more data, and any use of the SQL TRUNCATE will behave like a MASS DELETE (so no IGNORE DELETE TRIGGERS or IMMEDIATE options, for example). There are also a few ZPARMs that must all be reviewed and/or changed. Firstly, the ZPARM UTILS_BLOCK_FOR_CDC with options YES/NO and default NO. If set to NO then no utilities are barred from working on these DCC tables. If set to YES then:

  • CHECK DATA with DELETE YES LOG NO
  • LOAD with SHRLEVEL NONE or REFERENCE
  • RECOVER with TOLOGPOINT, TORBA, TOCOPY, TOLASTCOPY, or TOLASTFULLCOPY
  • REORG TABLESPACE with DISCARD
  • REPAIR with LOCATE DELETE

Terminated with Extreme Prejudice!

Will all be terminated. Why, you may wonder? Well, all of these can obviously *flood* the log with rows, millions of rows of data. If using data replication then their usage should be “evaluated” before attempting them – hence the ZPARM. Too many customers shot themselves in the foot with a badly timed LOAD, for example, so we got this “protection” ZPARM. Naturally, it is changeable online, so once you have reviewed the requirement for the utility you can still fire it off, if desired, and then, very quickly, switch back!

More?

Yes, there is always more! ZPARM REORG_DROP_PBG_PARTS with options DISABLE/ENABLE and default DISABLE. Even if this is set to ENABLE, allowing REORG to drop empty PBG partitions after a successful REORG, the drop of empty partitions is ignored when the table within is DCC! Finally, we have the ZPARM RESTRICT_ALT_COL_FOR_DCC with options YES/NO and default NO. It specifies whether to allow ALTER TABLE ALTER COLUMN for DCC tables. If it is set to YES, you will get an SQLCODE -148 when ALTERing a DCC object using any of the options SET DATA TYPE, SET DEFAULT or DROP DEFAULT. Again, this is to stop accidental log and data replication flooding!
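
For example, with RESTRICT_ALT_COL_FOR_DCC set to YES, an ALTER like this (hypothetical names) against a DCC table would be rejected with SQLCODE -148:

ALTER TABLE MYSCHEMA.ACCOUNTS
      ALTER COLUMN NOTES SET DATA TYPE VARCHAR(500);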

Compression Woes

So now you can fully see the quandary! If the rows are all compressed on the log you must access the dictionary to un-compress them on the “other side”, where you do the replication, or just locally if you want to see the data. As mentioned, getting our grubby little hands on the dictionary can cause performance issues so what can we do?

CDDS!

This arrived in Db2 11 for data-sharing people out there with a, and I quote, “GDPS® Continuous Availability with zero data loss environment.” It actually stands for Compression Dictionary Data Set and it brought in a couple of new ZPARMs:

  • CDDS_MODE with options NONE, SOURCE and PROXY and default NONE.
  • CDDS_PREFIX a 1 – 39 Byte z/OS dataset prefix for the VSAM Dataset where the “magic” happens.

VSAM???

Yep, you read that right! The CDDS VSAM dataset (CDDS_PREFIX.CDDS) stores up to three versions of the dictionaries that your tables/table parts are using, thus enabling fast, problem-free access to the dictionaries. Naturally, the CDDS must be available to *both* systems – and this is why VSAM was chosen, of course!

Start Me Up!

Some new commands also came in here. -START CDDS instructs all members of a data-sharing group to allocate the CDDS and start using it. -STOP CDDS tells all members to stop using CDDS, close and deallocate it. This must be done before recovering the CDDS, for example.

So???

So now you have a possible way of getting great performance in data replication, but how do you get it all running and how do you review what’s going on? This is where REORG and a new standalone utility DSNJU008 come in.

Reorg Yes But No…

The way to initialize your CDDS is simply to REORG all of your tables/table partitions  with the INITCDDS option. Do not panic! If this option is set to YES it will *not* do a REORG! It will purely externalize the current dictionary and carry on. This means you can use a “generator” SQL statement like this:

SELECT DISTINCT 'REORG TABLESPACE ' CONCAT STRIP(A.DBNAME)
                        CONCAT '.' CONCAT STRIP(A.TSNAME)
                        CONCAT ' INITCDDS YES'
FROM SYSIBM.SYSTABLES A
WHERE A.DATACAPTURE = 'Y'
;

to get all your REORG cards. Then, one mass pseudo-REORG later, your CDDS is up and running!
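
Each generated card then looks something like this (hypothetical database and tablespace names):

REORG TABLESPACE MYDB.MYTS INITCDDS YES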

Standalone

The DSNJU008 utility can be executed using JCL like this:

//CDDSPRIN EXEC PGM=DSNJU008
//STEPLIB  DD DSN=<Your Db2 exit lib>,DISP=SHR
//         DD DSN=<Your Db2 load lib>,DISP=SHR
//SYSUT1   DD DSN=<Your CDDS prefix>.CDDS,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
LIMIT(9999)
/*

Please refer to the Db2 for z/OS Utilities Guide and Reference for all the details about other keywords, but the interesting factoid is that VERSIONS are stored 1, 2, 3 with 1 being the most recent and the rest going backwards in time.

What Time Was It?

The program outputs everything you could ever wish for about your data sharing systems’ use of compression dictionaries. Including the Dictionary timestamp, which I find very nice as every now and again it really can pay to analyze the data with a new compression test, just to see if you can squeeze any more onto your SSDs!

Why All This Trouble, Again?

The idea behind all of this work is to make using IFCID 306 better, faster and cheaper. It also has a knock-on effect on the “vendor scape”, as lots of vendors offer tooling for data replication or log reading/analysis, and they should all now work with this feature, enabling you, the DBA, to be more productive at less cost. If their version of REORG/LOAD does *not* handle the CDDS, then you must always schedule a new pseudo-REORG with INITCDDS afterwards to handle the VSAM updates correctly.

Using It?

My question for this month, dear readers, is: Are any of you using this feature or are planning to use it soon?

TTFN

Roy Boxwell

2024-12 Security & Audit Check

Hi all! Welcome to the end-of-year goody that we traditionally hand out to guarantee you have something to celebrate at the end-of-year party! This time, I wish to introduce to you a full vulnerability check of your Db2 for z/OS systems.

DORA!

You should all be aware, and scared, of DORA by now. If not, read my prior newsletter 2024-11 DORA or check out my IDUG presentation or my recorded webinar. Whatever you do, you must get up to speed with DORA as it comes into force on the 17th Jan 2025 which is only one month away from the publishing date of this newsletter!

Not just Born in the USA!

Remember, DORA is valid for the whole wide world, not just businesses within the EU. If you do *any* sort of financial trading within the EU you are under the remit of DORA, just like you are with GDPR! Even if you are not trading within the EU bloc, doing a full vulnerability check is still a pretty good idea!

PCI DSS V4.0.1

We also now have the Payment Card Industry Data Security Standard (PCI DSS) V4.0.1 going live at the end of March 2025… Coincidence? I don’t think so. Mind you, at least the Americans do not fine anyone who fails!

What are We Offering?

This year, the product is called the SecurityAudit HealthCheck for Db2 z/OS or SAC2 for short. It is a very lightweight and robust tool which basically does all of the CIS Vulnerability checks as published by the Center for Internet Security (CIS) in a document for Db2 13 on z/OS:

CIS IBM Z System Benchmarks (cisecurity.org)

https://www.cisecurity.org/benchmark/ibm_z

This contains everything you should do for audit and vulnerability checking and is well worth a read!

Step-By-Step

First Things First!

The first thing SAC2 does is evaluate *all* security-relevant ZPARMs and report which ones are not set to a “good” value. It then goes on to check that installation defaults – the TCP/IP port number, for example – have not simply been left unchanged. It finishes off by validating that SSL has been switched on for TCP/IP communications and that any and all TCP/IP ALIAS definitions also have the correct settings.

Communication is Important!

Next up, is a full evaluation of your Communication Data Base (CDB). This data has been around for decades and started life for SNA and VTAM connections between Host Db2s. These days, SNA is dead and most of the connections are coming from PCs or Servers. That means that there *could* be a lot of dead data in the CDB and, even worse, ways of connecting to your mainframe that you did not even know, or forgot, existed! Think plain text password with SQLID translation for example!

Danger in the Details

Naturally, blindly changing CDB rows is asking for trouble, and if SAC2 finds anything odd/old/suspicious here, you must create a project to start removal. There is a strong correlation between “Oh I can delete that row!” and “Why can’t any of my servers talk to the mainframe anymore?”. The tool points out certain restrictions and pre-reqs that have to be done *before* you hit the big button on the wall! JDBC version for example.

Taking it All for GRANTed?

GRANTs can be the root of all evil! GRANTs TO PUBLIC just make auditors cry, and use of WITH GRANT OPTION makes them jump up and down. Even IBM is now aware that blanket GRANTing can be dangerous for your health! SAC2 analyzes *all* GRANTs: it discovers PUBLIC grants on the Catalog and Directory, as these should NEVER be done (with the one tiny exception of, perhaps on a good day when the sun is shining, SYSIBM.SYSDUMMY1), and it then checks all user data as well, since granting to PUBLIC there is just lazy. Checking everything for WITH GRANT OPTION makes sure you are working to modern security standards!
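
As a hedged sketch of the sort of check involved, this lists table privileges granted to PUBLIC – SYSTABAUTH is only one of several catalog authorization tables that need the same treatment:

SELECT GRANTEE, GRANTOR, TCREATOR, TTNAME
FROM SYSIBM.SYSTABAUTH
WHERE GRANTEE = 'PUBLIC'
ORDER BY TCREATOR, TTNAME
WITH UR
;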

Fun Stuff!

These days you should be using Trusted Contexts to access from outside the Host. This then requires Roles and all of this needs tamper-proof Audit Policies. On top of all this are the extra bits and pieces of Row Permissions and Column Masks. All of these must be validated for the auditors!

Elevated Users?

Then it lists out the group of privileged user IDs. These all have elevated rights and must be checked in detail: who has what, and why?
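
A minimal sketch of such a check against the catalog – SYSUSERAUTH holds the system-level privileges, and a value of 'G' means WITH GRANT OPTION:

SELECT GRANTEE, SYSADMAUTH, SYSCTRLAUTH, SYSOPRAUTH
FROM SYSIBM.SYSUSERAUTH
WHERE SYSADMAUTH  IN ('Y', 'G')
   OR SYSCTRLAUTH IN ('Y', 'G')
   OR SYSOPRAUTH  IN ('Y', 'G')
ORDER BY GRANTEE
WITH UR
;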

Recovery Status Report

Finally, it lists out a full Recovery Status Report so that you can be sure that, at the time of execution, all of your data was Recoverable.

It is a Lot to Process!

It is indeed. The first time you run it, you might well get tens of thousands of lines of output but the important thing is to run it and break it down into little manageable sections that different groups can then work on. This is called “Due Diligence” and can save your firm millions of euros in fines.

Lead Overseer

Great job title, but if this person requests data then you have 30 days to supply everything they request. Not long at all! SAC2 does the lion’s share of the work for you.

Again and Again and Again

Remember, you must re-run this vulnerability check on a regular basis for two major reasons:

  1. Things change – Software, Malware, Attackers, Defenders, Networks, Db2 Releases etc.
  2. Checks get updated – The auditors are always looking for more!

Stay Out of Trouble!

Register, download, install and run today!

I hope it helps you!

TTFN

Roy Boxwell

Future Updates:

The SAC2 licensed version will be getting upgraded in the first quarter of 2025 to output the results of the run into a Comma Separated File (CSV) to make management reporting and delegation of projects to fix any found problems easier. It will also get System Level Backup (SLB) support added. SLB is good but you *still* need Full Image Copies! Further, it will be enhanced to directly interface with our WLX Audit product.

2024-11 DORA

A title that should strike fear into the hearts of all readers! Nah! I really hope not! This month, I wish to run through the parts of DORA that impact the Db2 for z/OS world mostly…

What is DORA?

DORA is the Digital Operational Resilience Act. It was created on the 14th of December 2022 with an enablement date of 17th January 2025, giving us all over two years to read it, understand it, action the requirements from it, and accept it fully into our computing culture.

How many of those things have you accomplished in the last two years? You mean you had a day job as well?

Not just Born in the USA!

DORA is valid for the whole wide world, not just businesses within the EU. If you do *any* sort of financial trading within the EU you are under the remit of DORA, just like you are with GDPR!

PCI SSC DSS new update!

As well as DORA, there is a “new” version of Payment Card Industry Security Standards Council Data Security Standard 4.0.1. It comes into force on the 31st March 2025 and it contains a lot of overlap with “our” DORA here in the EU! Here’s a link to their website.

What is the Aim of DORA?

The idea is to bring together all the disparate EU regulations into one new regulation for nearly every financial trading house (FinTech), apart from a few exclusions, e.g. microenterprises, thus simplifying the requirements and easing audit and control.

Is it Just a New Level of Audit?

Most definitely not! As the name suggests, Digital Resilience is all about surviving an attack, or a disaster, and being back and processing data (money) as soon as possible. Included within is indeed a bunch of auditable things, but I will get to them later in this newsletter.

What does DORA Cover then?

Security, Operations, Recoverability and Test. Not only these, but these are, at least for me, the biggies. The last of them – Test – is incredibly important in what they mean by “Test”. They mean Performance Test, Reliability Test and Vulnerability Test. These are not all new for us but some are. We all remember GDPR and friends, where types of data had to be “respected” otherwise you got a hefty fine. Now, with DORA, the way you work, run, update and check your systems must all be proven and reported. If you do not deliver you get – guess what? – hefty fines!

Who’s the Boss?

You might have read about this, or seen a presentation, but the absolute “boss” for this regulation is the English PDF version here:

All other versions are translated and so may have errors.

Just the Facts, Ma’am

Well, no, I am not actually going to list all the facts of DORA here, but the highlights for me are the following opening paragraphs:

46 Mandates vulnerability testing

48 Maintained systems (Current release/PTF/APAR etc.)

49 & 50 Recoverability and RTO

56 Performance, Testing and Scanning

This is a brave new world for lots of us! One of the buzzwords is ICT which is basically IT.

Book, Chapter and Verse

Chapter 2 Section II Article 6 ICT risk management framework Paragraphs 2, 4 & 6

This is all about risk management and covers unauthorized access, segregation of duties and regular internal audits.

Chapter 2 Section II Article 8 Identification Paragraphs 1, 3 & 7

Hammers home the requirement for regular updates and risk assessments after major changes.

Chapter 2 Section II Article 9 Protection and prevention Paragraphs 1, 2, 3 & 4

This is pure audit: Monitor continuously what is happening, get tooling and policies in place. Make sure that all data is secure at rest, in use and in transit. Validate authenticity and guarantee strong authentication.

This is the most important part for me – It means encrypt at rest, use SSL for *all* things remote, do not use technical user ids with passwords, use Certificates and/or use Trusted Contexts and implement at least MFA for all users. The “in use” part is a bit crazy, especially for Db2, but I am pretty sure that the Bufferpool is “trusted” and so we, as z/OS users, can ignore this part for now…

Chapter 2 Section II Article 10 Detection Paragraphs 1 & 3

Detect whether weird stuff is happening and monitor user activity!

Chapter 2 Section II Article 11 Response and recovery Paragraphs 1 & 3

Make sure you have the ability to recover quickly, and in a timely manner, making sure you have response and recovery plans all ironed out.

Chapter 2 Section II Article 12 Backup policies Paragraphs 1 & 2

Guarantee that your image copies are enough and available to minimize downtime and limit disruption. These must be regularly tested!

Chapter 4 Article 24 Testing Paragraphs 1 & 2

Test! Test and test again – at least yearly!

Chapter 4 Article 25 Testing of ICT tools and systems Paragraphs 1 & 2

Performance Testing, Penetration Testing and Vulnerability Testing.

Chapter 4 Article 26 Advanced testing and TLPT Paragraphs 1, 2 & 6

More advanced testing, including a Threat-Led Penetration Test (TLPT) on live production systems at least every three years! All must be documented, of course…

Chapter 5 Article 35 Powers of the Lead Overseer Paragraphs 1, 6, 7 & 8

Lead Overseer – I get this mixed up with Supreme Leader all the time…

The Lead Overseer is the DORA God in a given country and can simply ask for any of the details I have just listed. Failure to deliver the goods within 30 days (Calendar days, not working days!) will result in fines…

I am Fine!

Well, the fines are pretty nasty… The Lead Overseer can fine any firm up to 1% of the average daily turnover from the previous financial year. This is then compounded by the fact that the Lead Overseer can levy this fine *every* day for up to six months!!! Ouch!!!

An Example Calculation

Taking a large bank as an example, just to show the math. The turnover in 2023 was nearly 60 billion euros. Divided by 365 gives us 164 million euros per day. Taking 1% of this (worst case) gives 1.64 million euros. Assuming the Lead Overseer is being especially nasty and levels the fines for 182 days leads to around 298 million euros in fines.

I, for one, do *not* want to be the first firm in Europe getting this… and it is all in the public domain which is then a massive image loss as well!

What can I do?

Well, first up, make sure all data is encrypted at rest – This should be a no-brainer due to modern disks/SSDs anyway.

Then, make sure that *all* remote access is using the SECPORT and is being encrypted in flight – again, this should be easy but remember to then set the PORT to be the same as the SECPORT which then forces all of this. Do not forget to check your TCP/IP ALIASs!

Do a full recoverability test to guarantee that you have all the Image Copies, Logs, Archive logs that you require to actually do a full recovery. If you can also meet your RTOs here then even better! Here our RealTime DBAExpert (RTDX) software can really help out, with timely Image Copies and a verification of complete recoverability.

Audit your Db2 Systems!

Do a Vulnerability Test on your Db2 Systems!

Audit?

I have done a lot of blogs and Webinars just about Audit, so I will spare you the details, but you must actually get it done. You will almost certainly require an external auditor who does a final check/validation that your audit is good to go and then you are done. Here, our excellent WLX Audit product can suddenly become your best friend!!!

Feeling Vulnerable Today?

The Center for Internet Security (CIS) has released a document for Db2 13 on z/OS:
CIS IBM Z System Benchmarks

It contains everything you should do for audit and vulnerability checking, is well worth a read, and you should then action the reports within.

Docu Docu Docu

All of these things must be performed, documented and repeated on a regular basis and sometimes even after a simple “change” has occurred.

The world is a bad place and DORA is here to help us really, but the start will, as always, be a hard climb!

TTFN,

Roy Boxwell

2024-10 Soundex and other cool features part eight(!) for Db2 z/OS 12 and 13

In this, obviously, never-ending series of new features, I will roll up all the new ones since my “SOUNDEX and other „cool“ features – Part seven All new for Db2 12” newsletter from 2021-05.

Starting with Db2 12 – PTFs first

APAR PH36071 added support for the SUBSTR_COMPATIBILITY ZPARM parameter with default PREVIOUS and other valid value CURRENT. In Db2 12 FL500 and above, with this APAR applied and the value set to CURRENT, then the SUBSTR Built-in Function (BiF) will return an error message for invalid input.

APAR PH42524 added MULTIPLY_ALT support to the IBM Db2 Analytics Accelerator (IDAA).

APAR PH47187 added support for UNI_90 locale in the LOWER, TRANSLATE and UPPER BiFs.

APAR PH48480 added LISTAGG and RAND to the IDAA offload support. Note: you must enter YES in ENABLE ACCELERATOR SPECIFIC RESULTS field on panel DSNTIPBA to get this boost.

Db2 12 FL 100

Yes, they introduced a new BiF for this level way after I wrote my last newsletter all about BiFs and Functions. The new BiF is the BLOCKING_THREADS table function. This is very handy when DBAs are about to start doing DDL work – adding or ALTERing a column or what have you! The output is a table showing who is blocking access to the database objects in question, and it can save a massive amount of work if you check, *before* you do all your ALTERs, that you can indeed succeed in getting them all done!

Now Db2 13 – PTFs first

APAR PH47187 added support for UNI_90 locale in the LOWER, TRANSLATE and UPPER built-in functions (BiFs).

APAR PH48480 added LISTAGG and RAND to the IBM Db2 Analytics Accelerator (IDAA) offload support. Remember, you must enter YES in ENABLE ACCELERATOR SPECIFIC RESULTS field on panel DSNTIPBA.

APAR PH51892 introduced vector prefetch for SQL Data Insights and improves the BiF AI_SEMANTIC_CLUSTER. You must also go to Db2 13 FL504.

APAR PH55212 enhanced SQL Data Insights and added support of numeric data types to the BiF AI_ANALOGY.

Db2 13 FL500

This was the release that introduced the Db2 SQL Data Insights with the new BiFs AI_ANALOGY, AI_SEMANTIC_CLUSTER and AI_SIMILARITY.

Db2 13 FL504

A new AI BiF, AI_COMMONALITY, was released, and when you use LISTAGG you can now add an ORDER BY to the fullselect.
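
A small, hedged sketch of LISTAGG with an ORDER BY on the fullselect, using a catalog table purely as convenient sample data:

SELECT CREATOR
     , LISTAGG(NAME, ', ') WITHIN GROUP (ORDER BY NAME) AS TABLE_LIST
FROM SYSIBM.SYSTABLES
WHERE CREATOR = 'SYSIBM'
  AND TYPE    = 'T'
GROUP BY CREATOR
ORDER BY CREATOR
WITH UR
;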

Db2 13 FL505

Another new BiF: INTERPRET, which can change nearly any argument into nearly any other data type. The most useful thing you can do with it is something like:

INTERPRET(BX'0000000000B0370D' AS BIGINT)    --     11548429

So this is taking a hex RID and interpreting it as a BIGINT. This is very useful when you get RID problems with the LOAD utility, for example. You can then simply plug in the BIGINT value into a query like:

SELECT * FROM TABLE1 A WHERE RID(A) = 11548429;

And then you’ll find the bad guy(s) very easily!

Naturally, I will be keeping this newsletter up-to-date, as necessary.

Any questions or ideas, do not hesitate to drop me a line,

TTFN,

Roy Boxwell