2025-09 Poor performing SQL – A buyer's guide

This month I wish to run through a bunch of, sadly, pretty common SQL coding mistakes that a lot of beginners fall into. Not just humans either! AI is increasingly generating code snippets which developers simply cut-and-paste in the hope that it:

  1. Is correct SQL
  2. Does what they want it to do
  3. Runs acceptably fast!

In the Beginning was the SELECT

So, let us begin with the number 1 mistake all coders make at some time or other:

SELECT * from mytable ;

Yep, SELECT * FROM mytable. We have all done this, and in SPUFI it is the absolute norm! But if you are coding SQL that must run fast it is deadly… Why?

You are *never* alone in this world and that is doubly true of SQL running on big iron. Think buffer pools, think work space, think FTB etc. RAM is cheap these days, but if it is all being used it is no good to anyone else! The reason I mention space here is that when you code a SELECT * you are, pretty obviously, selecting every single column in the table, view, MQT etc. This is clearly going to cost you in CPU and I/O, as Db2 must access every column in every row, format it, and return the value.

It gets worse, of course, as we live in a buffered world. All of this data is read and written into your precious little buffer pools and is also externalized into your darling little sort work spaces… You get the idea! Every extra column you add to the SELECT adds to the colossal amount of storage and CPU you are using. It gets even worse: if the Optimizer sees this, it will sometimes, and pretty often, switch off using an index as it thinks “Well, I gotta get back every column value so an index might not really save CPU here!” Whereas a SELECT with just the three columns you really want, which also happen to be in an index, will tell the Optimizer: just scan this comparatively small index dataset! The savings can be vast.
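
As a minimal sketch (the table, index and host-variable names are invented for illustration, assuming an index exists on (CUST_ID, ORDER_DATE, STATUS)), compare the greedy version with one the Optimizer can satisfy from the index alone:

-- Greedy: Db2 must touch and return every column of every qualifying row
SELECT *
FROM   ORDER_TABLE
WHERE  CUST_ID = :HV_CUST
;
-- Lean: with an index on (CUST_ID, ORDER_DATE, STATUS) this can be
-- index-only access, no data pages needed at all
SELECT CUST_ID, ORDER_DATE, STATUS
FROM   ORDER_TABLE
WHERE  CUST_ID = :HV_CUST
;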

It gets still even worse, of course… Coding a SELECT * in static or dynamic SQL will require a code change whenever you add or remove a column, as the cursor and/or program *is* aware of the columns and types of data being returned. Failure to do so is very bad news indeed!

Interesting little fact: Removing a column and then running with SELECT * will not actually cause a run-time error (a negative SQLCODE), but it can cause you to select garbage into fields. It is one of my pet bugbears that you can always have more columns on a FETCH than on a SELECT and Db2 does not warn or tell you!

Bad Index Usage

Using functions on columns tends to kill index access outright. If you can code around it – super! If you cannot, then an IOE (Index On Expression) might be your only hope. No one likes IOEs though… The expression used must match 100% with the expression in the SQL, so for UPPER or LOWER it is not a problem, but for SUBSTR(COL3 , 1 , 5) versus SUBSTR(COL3 , 1 , 4) it will fail – and Db2 will not tell you that the IOE was there but was not a 100% match. EXPLAIN is naturally your friend here!
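
Here is a minimal sketch, with made-up object names, of why the expression must match exactly:

-- A hypothetical index on expression
CREATE INDEX IX_COL3_SUB ON MYTABLE
   (SUBSTR(COL3 , 1 , 5))
;
-- Matches the IOE expression 100%: index access is possible
SELECT COL1
FROM   MYTABLE
WHERE  SUBSTR(COL3 , 1 , 5) = 'ABCDE'
;
-- Different length, so no match: the IOE is silently not used
SELECT COL1
FROM   MYTABLE
WHERE  SUBSTR(COL3 , 1 , 4) = 'ABCD'
;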

Missing Index Columns

Another absolute favorite of programmers is forgetting a join column… We have all done it, and so I am not going to throw the first stone here. But if you have two tables, both with four-column indexes, all with the same column names and/or functional content, and the EXPLAIN shows the join using just one or two columns – alarm bells should start ringing. Sometimes it must be like this, but most times a JOIN / WHERE predicate has simply been dropped along the way. These can be evil little problems, as sometimes the cardinality of the missing column is one, so the returned data is all perfect… Nasty! Here our SQL PerformanceExpert for Db2 z/OS (SPX) software can really help out with the hunt!
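
A little sketch, with invented tables that both have a four-column index on (COL1, COL2, COL3, COL4), of what to look for:

SELECT A.COL5, B.COL6
FROM  TABLE_A A
    , TABLE_B B
WHERE A.COL1 = B.COL1
  AND A.COL2 = B.COL2
  AND A.COL3 = B.COL3
--AND A.COL4 = B.COL4    <- the forgotten predicate that EXPLAIN should send you looking for
;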

Cartesian Join

If I had a Euro for every cartesian join I have seen in production I could retire! The major problem here is that sometimes you do not even see it, and sometimes the tables are so small that the results are still OK. So these little problems fly under the radar until one day that single-row table you are mistakenly *not* joining to grows up into 1,000,000 rows, and then your little SQL just grinds to a halt, sucking the whole system down with it! These cartesian joins are sometimes caused by SQL changes where a programmer removes a JOIN to a table and accidentally deletes one line too many in the code. The syntax is still fine, the query might still run fine – especially in test – but then that little table grows… Boom! Here you need to use EXPLAIN, as in the index case, to see *exactly* how the Db2 Optimizer is actually joining all the tables. If you are using SYSIBM.SYSDUMMYx style tables, then you can sometimes want, and even expect, a cartesian join, as you are 100% guaranteed to never have more than one row in that table! In all other cases, you had better make sure that there are correct JOIN … ON or WHERE criteria to avoid this pitfall.
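
Purely as an illustration (names invented), this is the classic accidental form: the table is still in the FROM list, but its join predicate went missing during a code change:

SELECT C.CUST_ID, O.ORDER_ID
FROM  CUSTOMER C
    , ORDERS   O
    , REGION   R                   -- still in the FROM list...
WHERE C.CUST_ID = O.CUST_ID
--AND O.REGION_ID = R.REGION_ID    -- ...but the join predicate was deleted: cartesian join!
;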

Correlated Death

I read many years ago that “Correlated Queries are better and faster than Non-Correlated Queries unless they are not”. I always loved this advice as its right up there with “It Depends” in the list of non-helpful helpful advice! However, it has spawned an industry of correlated queries where programmers are 100% sure that writing SQL with correlated queries *all* the time is the absolute bee’s knees in performance. It isn’t! Here is a classic case:

SELECT C.CUSTOMER_ID, C.NAME
FROM CUSTOMER C
WHERE EXISTS (SELECT 1
              FROM ORDERS O
              WHERE O.CUSTOMER_ID = C.CUSTOMER_ID)
;

Please do not do this, as the correlated subquery will be executed for *every* row in the CUSTOMER table. A better choice could well be:

SELECT DISTINCT C.CUSTOMER_ID, C.NAME
FROM CUSTOMER C
   , ORDERS   O
WHERE O.CUSTOMER_ID = C.CUSTOMER_ID
;

Sharp eyes will have noticed the additional DISTINCT on the second query. Why? Well, we (I) do not know whether multiple matching rows in ORDERS – and therefore duplicates – are wanted or not, probably not, so you must then add the DISTINCT to remove all the duplicates. This is actually one case where I would run both queries with EXPLAIN, and in production, to compare the results and CPU, I/O etc. side by side and check that the join version was/is the better choice. Remember: It Depends! There are so many possible good reasons for a correlated join that it is nearly impossible to simply say “always bad” or “always good”. You must always test what you think might help, as you might actually make things worse.

DISTINCTly a Bad Idea

I have so often seen queries like:

SELECT DISTINCT blah blah from blah
UNION
SELECT DISTINCT blah blah from blah
ORDER BY blah blah
;

Why do people still do this? Db2 is pretty clever, but if you tell it to sort everything and remove duplicates and then tell it again to do basically the same thing, it will do it! In this case just remove the DISTINCTs and try to get the column order into the ORDER BY sequence so that the last sort is a tick faster – it will save you a ton of I/O and CPU. Remember: only remove duplicates or sort if you really need to, but also remember: if you do not have an ORDER BY, the sequence of returned data is basically random! It might come back in beautiful sequential order just because of an index, and then tomorrow a different index could be used and POUF! Random order. Golden rule here: use the minimum number of DISTINCTs and UNIONs and at least one ORDER BY if the order of returned rows is important. If the sort order is not important then do not add it just for the humans!
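
A hedged before/after sketch with invented names. UNION already removes duplicates across the whole result, so the extra DISTINCTs only buy you two more sorts:

-- Redundant: three duplicate-removal operations for one result set
SELECT DISTINCT CUST_ID, NAME FROM CUSTOMER_EU
UNION
SELECT DISTINCT CUST_ID, NAME FROM CUSTOMER_US
ORDER BY CUST_ID, NAME
;
-- Same result, one duplicate-removal operation
SELECT CUST_ID, NAME FROM CUSTOMER_EU
UNION
SELECT CUST_ID, NAME FROM CUSTOMER_US
ORDER BY CUST_ID, NAME
;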

Bonus point: Some coders add DISTINCT because they get “duplicate” rows back from their query. Adding a DISTINCT fixes their problem, but all it actually does is hide it under a heap of CPU and I/O!

The Bachelor Problem – (Borrowed from Craig Mullins)

This is the failure to COMMIT. It might seem trivial at first, but after you have output about 10,000 log records and are up to over 1,000 row updates, it might be time to think about COMMITting all that work and freeing up all the rows of data and the buffer pool memory you are currently blocking. Remember that COMMIT will cause you to lose cursor position unless you have WITH HOLD defined on the cursor(s), and always remember to commit at transaction boundaries – never “half way through” a transaction, for example! I also never let a subroutine issue a COMMIT – it must be the “main” program that controls COMMIT frequency. My rule of thumb here is to COMMIT every 500 “transactions”, which typically cause many more than 500 updates to the database. I also always use a variable that can be set at run time to raise or lower this frequency. COMMITs after every update will kill you – never COMMITting will also kill you – pick a good middle path!
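
As a reminder of the cursor side effect (again just a sketch with invented names), only a cursor declared WITH HOLD keeps its position across a COMMIT:

-- FETCH from this cursor in a loop and COMMIT every n "transactions"
-- (n read from a run-time variable); the cursor position survives each
-- COMMIT thanks to WITH HOLD.
DECLARE BIG_CURSOR CURSOR WITH HOLD FOR
   SELECT CUST_ID, NAME
   FROM   CUSTOMER
   ORDER BY CUST_ID
;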

Too Much Data can hurt!

Following on from not COMMITting when you should is the other cardinal sin: fetching the world just to see if one row exists… I have really seen this query in *production*:

SELECT 1
FROM SYSIBM.SYSTABLES
;

This was being used as a “ping” to see if the z/OS Db2 sub-system was up and accepting work! Utterly crazy, and a grand example of where adding LIMIT 1 would be a great help – but really, why run this query at all? These days the z/OS Db2 is *always* there, so the query works 99.999% of the time, and in the 0.001% of cases where it fails, well, the world is ending anyway! Please review all queries to make sure they are only returning the rows you wish to actually process!
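
If you really must keep such a “ping”, a far cheaper sketch is a single-row probe against the one-row dummy table, limited just to be safe:

SELECT 1
FROM  SYSIBM.SYSDUMMY1
FETCH FIRST 1 ROW ONLY
WITH UR
;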

Einstein was Right!

He is famously quoted as saying “Insanity is doing the same thing over and over and expecting different results, like rebooting Windows PCs” – This has at least two or three areas where it hits the SQL Db2 world!

  1. Automation! If you do the same thing every morning every day – automate it!
  2. If you are doing COMMIT, ROLLBACK, ROLLBACK after every transaction – Stop!
  3. If you are doing CLOSE CURSOR at CICS start of transaction – Stop!

The first point is a no-brainer. If you can write a little script, or even a program, that stops you wasting 30 minutes every day, it will add up very quickly! Remember the clever DBA gets the machine to do his/her work and not the other way around!

Number two is, sadly, a real-life Hibernate problem that I discovered. The framework, after a COMMIT, always issued two ROLLBACKs – I have mentioned before that frameworks are all well and good, but when you have *no* idea what is going on, you have a problem brewing. In this particular case, the COMMIT and ROLLBACKs were not being traced, as they are not really SQL, so you had to look a bit deeper under the covers to see a crazy number of ROLLBACKs. Remember that Db2 is not the cleverest piece of silicon on the block … When it is told to ROLLBACK it dutifully saunters off and checks against the log to see if anything has been updated since the last COMMIT, and when you issue a ROLLBACK immediately after the last ROLLBACK, Db2 goes off and checks again … This is a tremendous waste of CPU and I hope none of you out there “find” this little gem …

The third point was also found live in production. Way, way, way back, some bright CICS programmer had the brilliant idea to start every transaction with a CLOSE CURSOR in case the cursor had been left OPEN by the last transaction … We all know this cannot happen, right? Anyway, this genius got his/her way and *every* CICS transaction issued a CLOSE CURSOR and got an SQLCODE -501 “The cursor you are closing is not open” – surprise, surprise! Just think: if you have over 20,000,000 CICS transactions per day, how much CPU does just this tiny error-handling path cost you? Again, it is an SQLCODE that needs to be traced, and they are not so easy to handle, but it is doable. Here, check how many weird or even “normal” SQLCODEs you get every day from all your running SQL – the contents can be quite scary!!!

No-one eats Stale Bread, do they?

But some people never care about Stale Statistics. If your STATSLASTTIME column in the Real-Time Statistics (RTS) is over five years old, and you have STATSINSERTS, STATSDELETES, STATSUPDATES or STATSMASSDELETES in the RTS, then you have to wonder, “Would a RUNSTATS possibly help me?”. Remember to always do a before/after check of the access paths – here our Bind ImpactExpert for Db2 z/OS (BIX) can really help you.
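
A sketch of such a check against the table-level RTS (the five-year threshold and the counters listed are just examples, adjust to taste):

SELECT DBNAME, NAME, PARTITION, STATSLASTTIME
      ,STATSINSERTS, STATSUPDATES, STATSDELETES
FROM  SYSIBM.SYSTABLESPACESTATS
WHERE (STATSLASTTIME IS NULL
   OR  STATSLASTTIME < CURRENT TIMESTAMP - 5 YEARS)
  AND (STATSINSERTS > 0 OR STATSUPDATES > 0 OR STATSDELETES > 0)
ORDER BY DBNAME, NAME, PARTITION
WITH UR
;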

Do you like Feedback?

Have you checked your SYSSTATFEEDBACK table to see what the Optimizer is crying out for? It is a gold mine of required data for your RUNSTATS, but beware, dear friends … It also holds the SQL query from Hell that the Management tried running once and got hopelessly wrong! In other words: review the contents, take a copy of the recommendations, delete them all and wait a while … If, after a couple of weeks, that row comes back into the SYSSTATFEEDBACK – BINGO! This is probably data that the Optimizer really needs and not a one-off query from hell! Here, our Real-Time DBAExpert for Db2 z/OS (RTDX) can really help you out!
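
A simple starting point (just a sketch, group and filter further as you see fit) to see what is being asked for and how often:

SELECT TBCREATOR, TBNAME, TYPE, REASON
      ,COUNT(*) AS ENTRIES
FROM  SYSIBM.SYSSTATFEEDBACK
GROUP BY TBCREATOR, TBNAME, TYPE, REASON
ORDER BY ENTRIES DESC
WITH UR
;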

Just the Facts, Ma’am

Well, I just wanted to run through a bunch of common errors and problems that I have seen around the world! I would love to hear from you if you have any other “Top Ten” style lists of evil things that have happened!

TTFN,

Roy Boxwell

2025-08 AI writes a “Db2 for z/OS Newsletter”

This month I thought I would let an AI do all the work… here is the result from the following prompt “write me a Db2 for z/OS newsletter about an interesting topic either to do with SQL or utility usage.”

Topic: Getting the Most from Temporal Tables in DB2 for z/OS

Why Temporal Tables Matter

In many industries—especially banking, insurance, and government—regulations require that you be able to reconstruct historical data exactly as it was at a given point in time. Prior to DB2 10 for z/OS, this was often done with custom history tables, triggers, and a lot of application code.

Today, temporal tables make this far simpler, with built-in DB2 features to maintain and query historical versions of data.

System-Period vs. Application-Period Temporal Tables

DB2 supports two main types of temporal tables:

  1. System-period temporal tables – DB2 automatically maintains historical versions of rows when changes occur. Two timestamp columns (ROW BEGIN and ROW END) and a TRANSACTION START ID are used to track validity.
  2. Application-period temporal tables – The application controls the validity period by providing its own start and end timestamps. Useful when the “effective” date of data is different from the transaction date.

You can also combine them into bi-temporal tables for the ultimate in time-based data tracking.

Basic Example: System-Period Temporal Table

CREATE TABLE CUSTOMER (
    CUST_ID         INTEGER NOT NULL,
    NAME            VARCHAR(100),
    STATUS          CHAR(1),
    ROW_BEGIN       TIMESTAMP(12) GENERATED ALWAYS AS ROW BEGIN NOT NULL,
    ROW_END         TIMESTAMP(12) GENERATED ALWAYS AS ROW END   NOT NULL,
    TRAN_START_ID   TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID 
                                                                NOT NULL,
    PERIOD SYSTEM_TIME (ROW_BEGIN, ROW_END)
) WITH SYSTEM VERSIONING;

Once the table is system-versioned, DB2 automatically creates and manages a history table. You can query historical data using the FOR SYSTEM_TIME clause:

SELECT *
FROM CUSTOMER
FOR SYSTEM_TIME AS OF TIMESTAMP('2024-07-01-10.00.00')
WHERE CUST_ID = 1001;

Performance Considerations

  • Indexing Matters: DB2 will maintain history indexes for you, but ensure they support your most common time-based predicates to avoid costly table scans.
  • Partitioning History Tables: If your history tables grow large, consider partitioning by date to keep queries efficient.
  • Utility Implications: REORG, COPY, and RUNSTATS operate on both the base and history tables. REORG with SHRLEVEL CHANGE can keep them online, but be mindful of increased elapsed time when history data is large.

Utility Tip: REORG and Temporal Tables

When running REORG on a temporal table, DB2 handles both the base and history table in a single execution if they are bound together. However, if you need to reorganize only the history table (for example, to improve access to older data without touching current production data), you can explicitly REORG just the history table.

Example:

REORG TABLESPACE HISTTS     <-- History table's tablespace
    SHRLEVEL CHANGE
    SORTDATA
    STATISTICS INDEX(ALL);

This can help reduce impact on current transactions while still maintaining query performance for historical reporting.

Best Practices

  • Use temporal queries rather than joining base and history tables manually; it ensures optimizer support and cleaner code.
  • Regularly RUNSTATS on both base and history tables to keep the optimizer informed about growth patterns.
  • For reporting, consider materialized query tables (MQTs) on top of temporal queries for faster access to common time-based reports.

Final Thought:

Temporal tables take much of the heavy lifting out of managing historical data in DB2 for z/OS. When paired with smart indexing, partitioning, and utility usage, they can deliver both compliance and performance—without the spaghetti code of the old days.

Maintaining Large Temporal Table Environments in DB2 for z/OS

(Utility Scheduling & Best Practices)

When you start using temporal tables—especially system-period or bi-temporal—your history tables grow quickly. This can cause issues if utilities aren’t tuned for their unique workload patterns.

Here’s a real-world maintenance workflow you can adapt.

1. Understand the Data Growth Pattern

  • Base table – Moderate growth, mainly current records.
  • History table – Potentially huge growth, as every UPDATE or DELETE adds rows.
  • Data skew – History tables are often append-heavy with few deletes.

Before setting schedules, run:

SELECT COUNT(*), MIN(ROW_BEGIN), MAX(ROW_END)
FROM HISTORY_TABLE;

This gives you volume and time-span, helping you plan partition-level REORGs.

2. Recommended Utility Workflow

Nightly (or per batch cycle):

  • RUNSTATS on base table only (small, keeps optimizer fresh for OLTP).
RUNSTATS TABLESPACE BASETS TABLE(ALL) INDEX(ALL) UPDATE ALL;

Weekly:

  • REORG history table partitions with recent activity.
    • Use PART to target only active partitions.
    • SHRLEVEL CHANGE to keep reporting jobs running.
REORG TABLESPACE HISTTS PART 15:17
    SHRLEVEL CHANGE
    SORTDATA
    STATISTICS INDEX(ALL);

Monthly:

  • COPY both base and history tables for backup compliance.
COPY TABLESPACE BASETS FULL YES
COPY TABLESPACE HISTTS FULL YES

Quarterly:

  • REORG full history table if query performance declines or RUNSTATS indicate high clustering ratio loss.
  • RUNSTATS on full history table.
  • Consider MERGECOPY to consolidate image copies.

3. Tips to Keep Utilities Efficient

  • Use LISTDEF to dynamically select only affected partitions:
LISTDEF HISTPARTS INCLUDE TABLESPACE HISTTS
    PARTLEVEL
    PARTCOND (LASTUSED > CURRENT DATE - 7 DAYS);

  • Parallelism – Large history REORG jobs benefit from PARALLEL keyword if CPU and I/O capacity allow.
  • Inline Stats – For history tables, you can take inline stats during REORG to avoid a separate RUNSTATS step.
  • Automation – Many shops integrate this with DSNACCOX to decide when REORG or RUNSTATS are needed.

4. Example End-to-End Utility JCL Snippet

//STEP1  EXEC DSNUPROC,SYSTEM=DSN1,UID='HIST.REORG'
//SYSIN    DD *
  LISTDEF HIST_PARTS INCLUDE TABLESPACE HISTTS
    PARTLEVEL
    PARTCOND (LASTUSED > CURRENT DATE - 7 DAYS)
  REORG LISTDEF HIST_PARTS
    SHRLEVEL CHANGE
    SORTDATA
    STATISTICS INDEX(ALL) UPDATE ALL
/*
//STEP2  EXEC DSNUPROC,SYSTEM=DSN1,UID='HIST.COPY'
//SYSIN    DD *
  COPY TABLESPACE HISTTS FULL YES
/*
//STEP3  EXEC DSNUPROC,SYSTEM=DSN1,UID='BASE.RUNSTATS'
//SYSIN    DD *
  RUNSTATS TABLESPACE BASETS TABLE(ALL) INDEX(ALL) UPDATE ALL
/*

5. Key Takeaways

  • Don’t treat history tables like “just another table”—their size and access patterns demand dedicated utility schedules.
  • Use partition-level maintenance wherever possible to cut elapsed time.
  • Integrate RUNSTATS and REORG with automation tools to keep the optimizer happy without manual intervention.

Whaddya Think?

Well, I do not think I will be unemployed anytime soon based on this, although, at first glance, it is not that bad. Then you look at the details and it all starts to go horribly wrong…

  • Table creation does not know of the keyword SYSTEM VERSIONING, which hints at the fact that the HIST table is not actually automatically created either!

The example SELECT SQL naturally dies with a:

DSNT408I SQLCODE = -20524, ERROR:  INVALID PERIOD SPECIFICATION OR PERIOD
         CLAUSE FOR PERIOD SYSTEM_TIME. REASON CODE = 02

until you manually create the history table and issue the required ALTER:

CREATE TABLE HIST_CUSTOMER (                       
 CUST_ID       INTEGER NOT NULL,                    
 NAME          VARCHAR(100),                       
 STATUS        CHAR(1),                            
 ROW_BEGIN     TIMESTAMP(12) NOT NULL,             
 ROW_END       TIMESTAMP(12) NOT NULL,             
 TRAN_START_ID TIMESTAMP(12) NOT NULL              
)
;                                                 
COMMIT 
;                                           
ALTER TABLE CUSTOMER                               
  ADD VERSIONING USE HISTORY TABLE HIST_CUSTOMER 
;
  • LISTDEF does not support conditional partitions (PARTCOND) although the idea is pretty cool.
  • Utility statements do not end in a semi-colon.
  • Finally, and the very worst mistake of all: seven uses of the capital B! The audacity!!!

What About You?

Have any of you had “interesting” AI suggestions in the world of Db2 for z/OS? I would love to hear from you!

TTFN,

Roy Boxwell

2025-07 zIIPing along!

This month I wish to spend a bit of time delving into the zIIP processor(s) that, hopefully, you all have available to use “for free”. Naturally, they cost a few thousand bucks apiece, but their usage is not in the rolling four-hour average and so is basically free. Of course, if you have a different price model where CPU usage is all-inclusive, then the use of these handy little beasts is moot!

What Is It?

They were first introduced in 2006 with the IBM System z9 processor complex. Their full name is System z Integrated Information Processor normally shortened to “zIIP”. They followed on and took over from the earlier zAAP, that was used for Java, and the IFL, which was for Linux and z/VM. Originally, they were just for Db2 workloads but nowadays quite a lot of non-Db2 work is zIIP eligible.

Eligible?

Yep, the wording is important! The fact that some function or code etc. is able to be run on a zIIP does not mean it *will* run on a zIIP. They are, after all, processors, and when they are all busy, your workload will just trundle on using the rest of the normal CPs (Central Processors) you have.

How Many?

It started out nice and easy… You could not have more zIIPs than you had CPs in your plex – so a 1:1 ratio. Then along came the zEC12, which changed the ratio to not more than 2:1. Nowadays, with the z16, IBM have thrown in the towel and announced there is no limit anymore!

When They Arrived

The first Db2 to exploit the zIIP was the “big change” version DB2 V8 when everything went UNICODE and long column on us all!

What have They Done for Us?

From the get-go, any TCP/IP-based remote SQL was eligible for offload to zIIP. This was a very, very good thing indeed and saved people mega-bucks. Parallel query child processes, running under a dependent enclave SRB (or an independent enclave SRB if coming from TCP/IP), also got zIIP support, as did some utility processes (index build for LOAD, REORG and REBUILD INDEX, a portion of index build under a dependent enclave SRB, and also a portion of sorting).

Sorry, SRB What?

You might have noticed a TLA (Three Letter Abbreviation) “SRB” occurring a lot in that text! So, what is an SRB and why is it so important? On mainframes, all work is run under two kinds of control blocks: task and service request blocks. Normally user programs, and system programs, use a Task Control Block (the TCB that we all know and love) and all run on normal CPs, not zIIPs! The Service Request Block (SRB), however, is for system service routines. They are initiated by a TCB to do special stuff, and starting them is called “scheduling an SRB”. To do this, your program must be running in a higher-authorized state called “supervisor state”. SRBs run parallel to the TCB task that scheduled them; they cannot own storage but can use the storage of the TCB. Only these SRBs are eligible to be offloaded to a zIIP.

And Then?

Well, when IBM brought out the z13 they merged the zAAP support onto the zIIP and since then, the general direction has been: If a task is an SRB then it *can* be made zIIP eligible. This has meant that there has been a gradual increase in Vendor take-on and IBM usage for these “helping hands”.

What about Db2 Usage?

In DB2 V9 they announced the actual, up until then hidden, limits of use. For TCP/IP remote SQL 60% offload, for parallel queries 80% offload, utilities up to 100% offload and, brand new, XML also up to 100% offload!

DB2 10

RUNSTATS offload was added, but *not* the distribution statistics and inline statistics parts, and Db2 buffer pools got 100% offload for prefetch and deferred write processing.

Db2 11

Not only did the B go lower case, but RUNSTATS got column group distribution statistics processing, and system agent processing got up to 100% offload when running under enclave SRBs, but not p-lock negotiation. This included page set castout, log read, log write, index pseudo-delete cleanup and XML multi-version document cleanout.

It was also here that they created the zIIP “needs help” function for when a delay occurs. This is controlled by the z/OS parameter IIPHONORPRIORITY YES/NO setting. YES is the default and tells a stalled zIIP to shunt the work to a CP. That might, or might not, be a good idea depending on your cost or time SLAs.

Db2 12

This brought RESTful support at 60% offload, Parallel went up to 100% offload and RUNSTATS also went to 100% offload.

Db2 13

All Db2 SQL AI functions went straight to 100% offload and the COPY utility got a 100% offload but only in the COPYR phase.

COPY now in Db2 13 FL507

I recently kicked my little test Db2 up to Db2 13 FL507 and then waited a couple of days to see the zIIP usage that COPY just got. We were informed it was just in the COPYR subphase of the Utility. I use the SYSUTILITIES table to track everything, so I wrote a little SQL that lists out all the Utilities, Counts, CPU, zIIP and Elapsed.

Here’s the SQL splitting with/without zIIP usage:

SELECT SUBSTR(NAME , 1 , 18)                              AS UTILITY
      ,COUNT(*)                                           AS COUNT 
      ,(SUM(CPUTIME)     * 1.000) / 1000000               AS CPU   
      ,(SUM(ZIIPTIME)    * 1.000) / 1000000               AS ZIIP  
      ,((SUM(CPUTIME) + SUM(ZIIPTIME)) * 1.000) / 1000000 AS TOTAL 
      ,(SUM(ELAPSEDTIME) * 1.000) / 1000000               AS ELAPSED
FROM SYSIBM.SYSUTILITIES                                           
WHERE ZIIPTIME > 0                                                 
GROUP BY NAME                                                      
UNION ALL                                                          
SELECT SUBSTR(NAME , 1 , 18)                              AS UTILITY
      ,COUNT(*)                                           AS COUNT 
      ,(SUM(CPUTIME)     * 1.000) / 1000000               AS CPU   
      ,(SUM(ZIIPTIME)    * 1.000) / 1000000               AS ZIIP  
      ,((SUM(CPUTIME) + SUM(ZIIPTIME)) * 1.000) / 1000000 AS TOTAL 
      ,(SUM(ELAPSEDTIME) * 1.000) / 1000000               AS ELAPSED
FROM SYSIBM.SYSUTILITIES                                            
WHERE ZIIPTIME = 0                                                 
GROUP BY NAME                                                      
ORDER BY 1 , 2                                                     
FOR FETCH ONLY                                                      
WITH UR                                                            
;                                                                  

Here is my output:

---------+------------+----------+--------+----------+-----------+-
UTILITY            COUNT        CPU     ZIIP      TOTAL     ELAPSED
---------+------------+----------+--------+----------+-----------+-
CATMAINT               5       .282     .000       .282       3.477
COPY                 925     11.673    4.907     16.581     914.838
COPY               60471   1017.939     .000   1017.939   65126.853
LOAD                   2       .005     .000       .005        .012
LOAD                 802     17.453    3.852     21.306    1990.150
MODIFY RECOVERY    59128    391.163     .000    391.163   15461.098
MODIFY STATISTICS     47       .120     .000       .120       1.276
QUIESCE               10       .015     .000       .015        .156
REBUILD INDEX          3       .027     .000       .027        .797
REBUILD INDEX          9       .082     .002       .085       2.502
RECOVER                9       .047     .000       .047       1.009
REORG                  4       .022     .000       .022       1.942
REORG                 28      2.075     .427      2.503     178.284
REPORT RECOVERY        3       .059     .000       .059        .454
RUNSTATS              33       .096     .000       .096       4.695
RUNSTATS            3575     44.477   92.323    136.801    1182.851
UNLOAD              1688    129.379     .000    129.379     989.501
DSNE610I NUMBER OF ROWS DISPLAYED IS 17

Here, you can see which utilities are zIIP enabled and how much the zIIPs save us. The new kid on the block, COPY, actually saves us about 30%, which is *not* to be sneezed at!

Checking in Batch

I add the “hidden” parameter STATSLVL(SUBPROCESS) to all my Utilities so that it outputs more info as I am a nerd and love more data! The numbers never all add up and so you must be a little careful, but here’s an example Image Copy JCL with output showing the counters and details:

//ICU005   EXEC PGM=DSNUTILB,REGION=32M,                
//     PARM=(DD10,'DD1DBCO0ICU005',,STATSLVL(SUBPROCESS))
//STEPLIB  DD DISP=SHR,DSN=DSND1A.SDSNEXIT.DD10         
//         DD DISP=SHR,DSN=DSND1A.SDSNLOAD              
//DSSPRINT DD SYSOUT=*                                  
//* THRESHOLD REQUEST DB2CAT REQUEST                     
//SYSIN    DD *                                         
 COPY TABLESPACE DSNDB01.SYSSPUXA                       
    COPYDDN (SYSC1001)                                  
    FULL YES SHRLEVEL CHANGE                            
//SYSC1001 DD DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,       
//            SPACE=(1,(352,352),RLSE),AVGREC=M,        
//            DSN=COPY.DD10.DSNDB01.SYSSPUXA.P0000.D25195
//SYSPRINT DD SYSOUT=*

Normal output:

DSNU000I    195 07:45:39.89 DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = DD1DBCO0ICU005                          
DSNU1044I   195 07:45:39.90 DSNUGTIS - PROCESSING SYSIN AS EBCDIC          
DSNU050I    195 07:45:39.90 DSNUGUTC -  COPY TABLESPACE DSNDB01.SYSSPUXA COPYDDN(SYSC1001) FULL YES SHRLEVEL CHANGE
DSNU3031I -DD10 195 07:45:39.91 DSNUHUTL - UTILITY HISTORY COLLECTION IS ACTIVE. 
                      LEVEL: OBJECT, EVENTID: 238604 
DSNU3033I -DD10 195 07:45:39.92 DSNUHOBJ - SYSIBM.SYSOBJEVENTS ROWS INSERTED FOR OBJECT-LEVEL HISTORY             
DSNU400I    195 07:46:23.09 DSNUBBID - COPY PROCESSED FOR TABLESPACE DSNDB01.SYSSPUXA
                      NUMBER OF PAGES=222109
                      AVERAGE PERCENT FREE SPACE PER PAGE =  2.75
                      PERCENT OF CHANGED PAGES =  0.00
                      ELAPSED TIME=00:00:43   
DSNU428I    195 07:46:23.09 DSNUBBID - DB2 IMAGE COPY SUCCESSFUL FOR TABLESPACE DSNDB01.SYSSPUXA

Then the extra stuff, sorry about the formatting but WordPress is not good for batch output:

----------------------------------------------------------------------------------------------------------
                                          U T I L I T Y   S T A T I S T I C S                             
                                                                                                          
INTERVAL = UTILITY HISTORY        CPU (SEC)  = 0.000288          ZIIP = 0.000000              
   LEVEL = UTILINIT SUBPROCESS    ELAPSED TIME (SEC) = 0.000                                              
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                 6            2                                                                      
  BP32K               2            1                                                                      
                                                                                                          
  TOTAL               8            3                                                                      
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = UTILITY HISTORY        CPU (SEC)  = 0.000091          ZIIP = 0.000000              
   LEVEL = UTILINIT SUBPROCESS    ELAPSED TIME (SEC) = 0.000                                              
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                 3                                                                                   
  BP32K               1            1                                                                      
                                                                                                          
  TOTAL               4            1                                                                      
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = OBJECT-LEVEL HISTORY   CPU (SEC)  = 0.000147          ZIIP = 0.000000              
   LEVEL = UTILINIT SUBPROCESS    ELAPSED TIME (SEC) = 0.001                                              
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                 6            2             1                                                        
  BP32K               2            1                                                                      
                                                                                                          
  TOTAL               8            3             1                                                        
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = UTILINIT               CPU (SEC)  = 0.002101          ZIIP = 0.000000              
   LEVEL = PHASE                  ELAPSED TIME (SEC) = 0.021                                              
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                48            6             2                                                        
  BP32K               9            5                                                                     1
  BP32K               5            3                                                                      
  TOTAL              62           14             2                                                       1
                                                                                                          
  DDNAME        DS OPEN     DS CLOSE      READ I/O     WRITE I/O    I/O CHECKS      I/O WAIT    END OF VOL
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  SYSPRINT                                                     3             3                            
                                                                                                          
  TOTAL                                                        3             3                            
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = COPYRDN0001      "     CPU (SEC)  = 0.008764          ZIIP = 0.273090              
   LEVEL = SUBPHASE               ELAPSED TIME (SEC) = 43.033                                             
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0            223486            3          2713             1          4982                            
  BP32K              73           50                                                                     1
  BP32K               2            2                                                                      
                                                                                                          
  TOTAL          223561           55          2713             1          4982                           1
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = COPYWDN0001      "     CPU (SEC)  = 0.357434          ZIIP = 0.000000              
   LEVEL = SUBPHASE               ELAPSED TIME (SEC) = 43.032                                             
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = Pipe Statistics                                                                                
    TYPE = COPY Data Pipe000                                                                              
                                                                                                          
  Records in:                                     222,110                                                 
  Records out:                                    222,110                                                 
  Waits on full pipe:                                 360                                                 
  Waits on empty pipe:                                  0                                                 
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = COPY                   CPU (SEC)  = 0.004909          ZIIP = 0.000000              
   LEVEL = PHASE                  ELAPSED TIME (SEC) = 43.167                                             
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                22            7             1                                                        
  BP32K               9            8                                                                      
                                                                                                          
  TOTAL              31           15             1                                                        
                                                                                                          
  DDNAME        DS OPEN     DS CLOSE      READ I/O     WRITE I/O    I/O CHECKS      I/O WAIT    END OF VOL
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  SYSPRINT                                                     4             4                            
  SYSC1001            1            1                       37022         37022        38.793             1
                                                                                                          
  TOTAL               1            1                       37026         37026        38.793             1
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = UTILTERM               CPU (SEC)  = 0.000150          ZIIP = 0.000000              
   LEVEL = PHASE                  ELAPSED TIME (SEC) = 0.002                                              
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                 2                                                                                   
  BP32K               2            1                                                                      
                                                                                                          
  TOTAL               4            1                                                                      
                                                                                                          
----------------------------------------------------------------------------------------------------------
INTERVAL = COPY                   CPU (SEC)  = 0.373401          ZIIP = 0.273090              
   LEVEL = UTILITY                ELAPSED TIME (SEC) = 43.191                                             
                                                                                                          
  BUF POOL     GETPAGES     SYS SETW    SYNC READS    SYNC WRITE    SEQ PREFCH   LIST PREFCH    DYN PREFCH
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  BP0                72           13             3                                                        
  BP32K              20           14                                                                     1
  BP32K               5            3                                                                      
                                                                                                          
  TOTAL              97           30             3                                                       1
                                                                                                          
  DDNAME        DS OPEN     DS CLOSE      READ I/O     WRITE I/O    I/O CHECKS      I/O WAIT    END OF VOL
  --------   ----------   ----------    ----------    ----------    ----------   -----------    ----------
  SYSPRINT                                                     7             7                            
  SYSC1001            1            1                       37022         37022        38.793             1
                                                                                                          
  TOTAL               1            1                       37029         37029        38.793             1
                                                                                                          
----------------------------------------------------------------------------------------------------------

Here, you can easily see that the only phase with any zIIP values is the INTERVAL = COPYRDN0001 which is obviously the COPYR IBM mentioned, so that is 100% correct!

Bells and Whistles?

Here’s a complete list of all the z/OS parameters of interest for zIIPs in the IEAOPTxx parmlib member:

  1. IIPHONORPRIORITY=xxx – Default is YES.
  2. PROJECTCPU=xxx – Default is NO. Whether to report possible zIIP / zAAP offload data.
  3. HIPERDISPATCH=xxx – Default is YES.
  4. MT_ZIIP_MODE=n – Default is 1. This is the multithreading flag, changing it to 2 enables zIIP multithreading and, sort of, doubles your number of zIIPs.
  5. CCCAWMT=nnnn – Default is 3200 (3.2 ms). This is the time z/OS waits before waking up idle CPs or zIIPs.
  6. ZIIPAWMT=nnnn – Default is 3200 (3.2 ms). This is the time to wait before a busy zIIP asks for help (IIPHONORPRIORITY=YES).

All of these parameters, with the possible exception of MT_ZIIP_MODE, should normally be left at their default values unless you really know what you are doing and what is running on your system! The multithreading parameter is naturally only supported on systems where the zIIP can go multithreading.

Anything Else?

Security encryption can run on zIIP and, as everything is encrypted these days, that can be a very good idea. Other eligible workloads include XML usage in COBOL, PL/I, Java, CICS, IMS etc., and machine learning with ONNX – Python AI and ML workloads are 70% eligible. System Recovery Boost is eligible on z15 and above, and SVC dumps on z16 and above. The Java workload within z/OSMF can go up to 95% offload according to IBM internal tests. With the zIIP-to-CP ratio limit now gone, the sky is basically the limit!

Saving the Day?

What are zIIPs saving your firm? I would love to get a screen shot of the output of that SQL!

TTFN

Roy Boxwell

2025-06 IDUG NA 2025

This month I wish to run through the IDUG NA 2025, not every presentation but the ones I attended or held. The IDUG was a very good one indeed, I thought! My colleague Ulf and I got off to a bad start when we missed our flight connection in Amsterdam due to bad weather but KLM got us on a flight to Boston, so we arrived in Atlanta only six hours later than planned… Ho hum! Such are the banes of modern life!

Db2 for z or Db2 for LUW?

You might well have heard that the IDUG have moved away from a pure z/OS or LUW style “tracks” system and now use a named-track approach, which *still* catches people out, as they assume that everything in the second column is just for LUW – not true, dear readers! Anyway, it *always* pays to read through the whole grid before planning your 10,000 steps per day timetable!

Keynote

The opening keynote, “Leveraging your Db2 Data for Enterprise AI” from two IBM VPs, Minaz Merali and Priya Srinivasan, was a very good one and well attended. We just managed to finish getting our booth ready in time, as it was “news to us” that the Expo was also where the keynotes were going to be held all week!

It starts!

The technical sessions then kicked off with a “Spotlight” session from Haakon Roberts doing his excellent “Trends and Directions” as a double header with Akiko Hoshikawa as well. It was listed as session A1 in the grid but then IDUG said it was S1 – which caused some confusion when filling in the reviews! Anyways, I really enjoyed it, especially the interaction with the audience, as they all suddenly realized that in a few short years several, or maybe lots, of their objects will be unsupported… For you, dear readers, just download and run my good old Migration HealthCheck for Db2 z/OS to see how many of the evil beasts, that IBM are now officially killing off, you still have lounging around in production! We all have time, lots of time, to “fix” these problems – Don’t panic! Even IBM must do some work to finally get rid of Simple, Multi-table and non-UTS tablespaces in the Directory and Catalog!!! But start planning and checking now… forewarned is forearmed!

Performance Review [access @ IDUG]*

Then came A2 from Akiko Hoshikawa with the “Key Performance Updates” session – again an excellent session, with a great tip around DSMAX: you should be extremely careful about having a very large number of open datasets, especially indexes, as the root page is always pinned in the buffer pool! So, if you have 100,000 open indexes you can imagine how bad your buffer pool(s) will look! A secondary problem is actually the time it takes to close all these datasets at Db2 shut-down… Db2 does not actually care and passes the buck to z/OS to do it all!

RUNSTATS & Monitoring [access @ IDUG]*

Then I held my first session: C3 “RUNSTATS Master – reloaded”. If you want to learn waaaay more than you should about RUNSTATS, feel free to also download and run our Statistics HealthCheck for Db2 z/OS. Then I popped over to see the Tom Glaser session E5 “Don’t have an SQL monitor? You might need a bigger shovel”, where he drilled down into the nuts-and-bolts of what you must/should monitor and showed which metrics are useful for tuning your systems.

Keynote – Treasure

Tuesday began with another good key note session from Greg Lotko, a Senior VP at Broadcom. All about Pathfinders and finding treasure – extremely entertaining, I thought!

Utilities Review [access @ IDUG]*

Later the sessions started and I joined Ka Chun Ng for his “Db2 for z/OS Utilities” session as moderator… Here, some technical problems raised their ugly heads, and we had bad audio/visual issues which delayed the start by about ten minutes. This meant Ka Chun could not finish his presentation. This was a real shame, as it is crammed full of great stuff for us Utility nerds out there! He even updated me about an error I had in my RUNSTATS presentation – fantastic! Top things here were the zIIP offload for COPY – not much CPU is actually offloaded, but how *many* image copies do you run every day?? REGION=0M is the best for utilities, but we can never use that, can we? He pointed out that utilities are capped at 1.6GB – so now you can allocate a correct REGION size without breaking your firm’s internal standards. Slide 19 was a useful reference for the large REORGs that we must all do at some point to finally get to PBR RPN tablespaces. He also mentioned one of my favorite bugbears as well -> REORG SYSLGRNX regularly with MODIFY RECOVERY – this shrinks its size dramatically and really improves overall system performance in a major knock-on effect! Loads of people either do not know this or just do not bother!

Hack Attack? [access @ IDUG]*

A quick stroll back through the rabbit warren of rooms and corridors then brought me to F7: “How to Hack Db2 for z/OS” by Emil Kotrc – Have no fear, friends, there is nothing here that will let hackers into your system like a zero-day style hack, but it is a full list of possible vectors that should be a) known about and b) discussed. Biggest takeaways -> Check your access permissions to APF-authorized load libraries and sanitize your dynamic SQL input!

Top Ten Lists [access @ IDUG]*

After lunch I moderated Craig Mullins’ D8: “My All-Time Db2 Top Ten lists” which won the best user presentation award! Full of great info and great fun to see/hear. You always learn stuff at Craig’s presentations!

Security! [access @ IDUG]*

Next up was F9 from Gaya Chandran: “Modernizing your Security posture around Db2 z/OS data” which rang bells with me in my Auditor role. Slide nine was the biggest winner for me… And then she reviewed all the new/old/nice security things we have on Db2 for z/OS that must simply be reviewed and/or used… It could make your world much better!

Keynote – Go take a Hike!

Wednesday started with another great keynote from Jennifer Pharr Davis. I had a chat with her at our booth before she started and she was really interested in Db2 and the whole ecosystem. She is, what I would call, an extreme hiker! She has walked the Appalachian Trail (Spans 14 States and nearly 2,200 Miles/3,500 km) three times. Madness, I would say, but from these experiences she learned a lot about resilience and adaptability!

Profile Tables! [access @ IDUG]*

Then it was into session D10 with Scott Walker and Gaya Chandran: “Db2 z/OS 13 – using Profiles to monitor/block unsecure TCP/IP connectivity”, where we learned how Scott set up, ran, monitored, and updated all his Db2 systems to go from insecure to secure TCP/IP port usage in a well-planned and documented style. This highlighted the usefulness of PROFILE tables and proves again that this is one of the best features ever in Db2 for z/OS, in my opinion!

DORA & PCI DSS [access @ IDUG]*

Then I was up again with D11: “Isn’t she aDORAble?” all about Audit, Compliance, Resilience and how much we need to change into “internal auditors” before a “lead overseer” comes along and makes your life horrible! Feel free to use this presentation at your site to hammer home how much it will cost if you do not start doing stuff now! Due diligence… Try out our freeware SecurityAudit Health Check for Db2 z/OS.

Time to go!

That was it for me – Had to leave in the middle of lunch to get my flight back home! One last word must be mentioned about the food – Fantastic! The lunch and coffee break beverages and food were simply great! My personal favorite was when I was at the dessert table and saw “Mexican Chocolate Cheesecake” and wondered out loud to a random guy next to me “I didn’t know that Mexicans made chocolate cheesecake” he replied “The funny thing is nor did I – and I am Mexican!” Made my day!

and finally…

My colleague Ulf also got to hold a session: B15 “Understand, Manage and Love Certificates in z/OS and USS” [access @ IDUG]* on Thursday which was all about the “brave new” world of certificates and key-rings and how you cannot afford to ignore them anymore! All went down splendidly!

TTFN,

Roy Boxwell

If you attended, or it is two/three years later 🙂 , you can access all of the presentations here [access @ IDUG]*.

At the time of writing the A1/S1 presentation from Haakon is sadly not available…

Note: * To get the links to work you must be registered at IDUG *and* entitled to the content. Otherwise you will get an “Oops!” screen popping up telling you that you lack authorization! If that is the case you can then think about buying the “Premium” version of IDUG membership, which does allow downloading of conference files without having to wait around two years.

2025-05 Central Canada Db2 Users Group (CCDUG) 2025 review

This month I wish to give a quick review and roundup of the Central Canada Db2 Users Group (CCDUG) meeting 2025. It was held in a brand-new location this year, at the IBM Toronto development labs – We were all wondering if it could be anywhere near as good as the “old” BMO Institute for Learning. Spoiler alert: it was!

Who are those two reprobates sat there??? 🙂

Tracks Tracks Tracks plus a New Day!

This year, the CCDUG had a free bonus “first” day dedicated to IMS: “Intro to IMS Day”. This was a deep dive into everything HDAM’my, PSB’y and DBD’y in the world of the one true hierarchical database!

Back to Normal?

The keynote from Greg Lotko started things off, and I must admit I was impressed! I go to so many conferences, IDUGs and seminars etc. that keynotes tend to let me down… Not this time: it was interesting, engaging and, quite frankly, the best keynote I have seen!

Now Back to Normal!

The two-day conference was split into four separate tracks:

  • Db2 for z/OS (I spent most of my time here, of course!)
  • Db2 LUW
  • IMS
  • AppDev which was cross-platform and database!

Here’s a link to the presentation grid download page overview where you can check out abstracts etc.

The actual presentations are also available here, so you can grab copies for your good selves.

The Conference

There were a few last-minute grid changes due to cancelled flights etc. So, what you see on the grid is not actually 100% what was presented, but it was really just swapping presentations and/or presenters! I apologize now if I have missed anyone out who thinks they should be included, but even I make mistakes these days!!!

Personal Faves

ZOS-01 Db2 13 for z/OS Latest Features with Mark and Tori rattling through all the new and cool stuff in Db2 13. I really like Mark Rader, Tori Felt and Anna McKee – they have a great synergy together and know a ton of stuff. So, anything presented by them is simply a must have!

ZOS-02 Universal Tablespace update as of V13 with David Simpson was very good indeed including tips and tricks about finally getting to an all-UTS world which is important in ZOS-06!

ZOS-04 Audit your Db2 for z/OS – Isn’t she aDORAble!! by yours truly, Roy Boxwell: A breathtaking run through the world of legal double-speak and corporate hell. Voted by one attendee as the scariest thing he has seen in his life and by another as simply “the horror, the horror”…

ZOS-05 Db2 for z/OS Health Check Topics with Mark and Tori, this time going through the results of the IBM 360-degree health checks that they do at customer sites and reporting on common problems that they repeatedly find.

ZOS-06 The Db2 for z/OS Catalog and Directory: The Past, Present, and Future. John Lyle talking about the Db2 Catalog and Directory, including the fact that migration to Db2 Vnext will *not* be possible if you have any non-UTS tablespaces (ignoring LOB and XML of course!). I asked about DEFINE NO and he confirmed that a DEFINE NO non-UTS will stop the migration. I then opened an Aha Idea as I think this could cause grief at a lot of sites… Please go to Aha Ideas and vote!

DB24ZOS-I-1752   Support REORG for DEFINE NO

I have heard, in various IBM presentations (Haakon at IDUG Valencia, John Lyle at CCDUG) talking about Db2 Vnext, the statement “no migration to Vnext if you have 6 byte RBA/LRSN or non-UTS tablespaces”. This is fine unless you happen to have old DEFINE NO objects. You cannot REORG these to action the ALTER command as REORG is disallowed on DEFINE NO objects. We cannot simply INSERT a row into every DEFINE NO object as this would create possibly hundreds of objects which, by definition, are not really in use (yes, they can be SELECTed from!). Reverse engineering the DDL, then DROP and reCREATE, reGRANT and BIND, is just a crazy amount of effort.

My idea is simply to allow REORG on DEFINE NO objects, which would just set the correct bits in the catalog so that, when the object is eventually defined, it is created as a UTS space with MAXPARTITIONS 1 DSSIZE 64GB, which is 100% OK.
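
If you want to see whether this could bite you, here is a minimal sketch of a catalog query, under the assumptions that SPACE = -1 in SYSTABLEPART still marks datasets that have never been defined (DEFINE NO) and that anything other than the UTS types 'G' and 'R' in SYSTABLESPACE.TYPE counts as non-UTS (LOB and XML spaces are excluded):

SELECT TS.DBNAME, TS.NAME, TS.TYPE, TP.PARTITION
FROM SYSIBM.SYSTABLESPACE TS
    ,SYSIBM.SYSTABLEPART  TP
WHERE TS.DBNAME    = TP.DBNAME
  AND TS.NAME      = TP.TSNAME
  AND TP.SPACE     = -1
  AND TS.TYPE NOT IN ('G', 'R', 'O', 'P')
ORDER BY 1 , 2 , 4
FOR FETCH ONLY
WITH UR
;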

ZOS-07 Db2 for z/OS and Db2 Administration Foundation: An Installation Experience, as well as having the longest title of the conference(!), was a real-world report of how to get this tool installed, up and working with Zowe. We all know we must Zowe, right? Remember that Data Studio has officially been declared dead and is now buried as well!

AppDev-03 Db2 for z/OS 101: This or That? Tools to interact with Db2 for z/OS! This was Anna and Mark doing a double-header of lists and lists of different tooling for different business cases and whether they are free – pardon me “non charge items” – or not!

AppDev-06 Exploit Certificates and eliminate tiresome password pains in z/OS and USS. From Ulf Heinrich, this shows you how to move painlessly into the brave new world of certificates on the mainframe. Required reading, especially when going to Zowe!

Fill in Your Form, Please!

As at every conference, you had to submit your conference review form at the end, where I happily plugged for a COBOL special day next year too and, perhaps, a bit less garlic in the chicken pasta!

That’s all, Folks!

We had to leave early to catch our flight back – which then got a delayed take-off and, due to huge queues at passport control *in Germany*, we missed our connecting flight to Düsseldorf… The nice ladies at our check-in desk gave us both a €7 voucher and we got some train tickets for the high-speed ICE Sprinter instead. Flying along at 300 km/h while at ground level is quite impressive!

Hope to see some of you at the IDUG NA 2025 in Atlanta, from Sunday 8th June until Thursday the 12th! If you cannot make it to Atlanta don’t worry – feel free to pop on over to Düsseldorf for the IDUG EMEA 2025 from Sunday 26th October until Thursday the 30th! Maybe you can also take the high-speed train too?

TTFN

Roy Boxwell

2025-04 COMPRESS you like?

This month, I wish to do a brief overview of the pros and cons of using COMPRESS YES at the tablespace and index level.

Starting with Tablespaces

Compression came in with DB2 V3 and required a special processor for the CMPSC instruction to work at any really useful speed. Nothing changed for a loooooong time until DB2 V9, when XML compression was added. Then, in DB2 10, “partial compression” came along: the LOAD utility builds the dictionary after about 1.2 MB of data has been inserted and compresses everything from then on. Afterwards, in Db2 12 FL504, we got a brand-new compression routine, only for UTS, called “Huffman”, which also led to the “old” routine being renamed “fixed length”. Then, in FL509, Huffman got fully externalized into the Db2 Catalog and full utility support was also introduced. All of these methods require the creation, and maintenance, of a dictionary, where Db2 can look up the byte codes/words that actually do the compression.

A new ZPARM as well

Naturally, to specify the default compression routine in use, a new ZPARM appeared: “TS_COMPRESSION_TYPE”. If it is set to HUFFMAN, then you can simply issue an ALTER like ALTER TABLESPACE ROY.BOY COMPRESS YES, but if it is not set to HUFFMAN, then the ALTER must be specific: ALTER TABLESPACE ROY.BOY COMPRESS YES HUFFMAN to get the new method.
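
As a minimal sketch, using the ROY.BOY tablespace name from above and assuming TS_COMPRESSION_TYPE has been left at its default:

-- HUFFMAN must then be named explicitly; a REORG is still needed
-- afterwards to build the dictionary and compress the existing rows.
ALTER TABLESPACE ROY.BOY COMPRESS YES HUFFMAN ;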

Indexes

Index compression came in DB2 V9 and it is not really the same as tablespace compression at all! Firstly, there is no dictionary, as it is all handled in the Db2 I/O engine. It is really a “prefix” compression and it forces a larger index bufferpool size to get it done – this has benefits and costs, of course. The major difference is that the indexes are always compressed on disk down to a 4K page size and expanded in the bufferpool up to the 8K, 16K or 32K size you have given them. The other thing to remember is that index compression is only for leaf pages – nothing else gets compressed.
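
As a minimal sketch, assuming a hypothetical index ROY.BOY_IX1 and that the 8K bufferpool BP8K0 is already defined and sized (an 8K, 16K or 32K bufferpool is a prerequisite for COMPRESS YES on an index):

ALTER INDEX ROY.BOY_IX1 BUFFERPOOL BP8K0 ;
ALTER INDEX ROY.BOY_IX1 COMPRESS YES ;
-- Both changes typically leave the index in a pending/rebuild state,
-- so plan a REBUILD INDEX or REORG afterwards.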

XML

XML spaces supported “fixed length” compression in DB2 V9 but also needed a special processor to actually work.

LOB

Yes, you can nowadays also compress LOBs in Db2 12 and above, but it needs the zEDC Express feature installed to do so.

zEDC requires the following:

  • z/OS® V2R1 (or later) operating system.
  • One of the following:
    • IBM® z15®, or later, with the Integrated Accelerator for zEDC
    • IBM zEnterprise® EC12 CPC (with GA2 level microcode) or zBC12 CPC, or later, with the zEDC Express feature.
  • zEDC software feature enabled in an IFAPRDxx parmlib member. For more information, see Product Enablement for zEnterprise Data Compression.
  • Adequate 64-bit real storage that is configured to this z/OS image.

Huffman?

The first compression algorithm here was Lempel-Ziv, where the dictionary, typically of 4,096 entries, has a 12-bit “key” for each value to be replaced. This worked great, but over the years newer methods have appeared. One of them is Huffman entropy compression. It is similar to Lempel-Ziv, but it sorts the dictionary entries by frequency before assigning a variable number of bits to each value. (This is why the old method is called “fixed length”, of course!) So, if you have millions of occurrences of THE, it could compress down to a single bit – a huge win over the 12 bits for Lempel-Ziv. But remember the great sayings: “It Depends” and “Your mileage may vary,” as not all data is good for compression and certainly not all data is good for Huffman compression.

Suck it and see!

IBM supplied a very nice utility, way back in the day, called DSN1COMP, which has become very good over the years and is much more user friendly than when it first came out! Basically, you give it the VSAM cluster name of the object you wish to test, plus a handful of parameters to give it a clue about what you have defined (FREEPAGE, PCTFREE, PAGESIZE and DSSIZE), and then Bob’s Your Uncle – you get a nice report:

DSN1COMP Idiots Guide

The basic JCL looks like this:

//DSN1COMP EXEC PGM=DSN1COMP,                                        
// PARM='PCTFREE(0),FREEPAGE(0),ROWLIMIT(9999),REORG'                
//STEPLIB  DD DISP=SHR,DSN=xxxxxx.SDSNEXIT.xxxx                       
//         DD DISP=SHR,DSN=xxxxxx.SDSNLOAD                           
//SYSPRINT DD SYSOUT=*                                               
//SYSUT1   DD DISP=SHR,DSN=xxxxxx.DSNDBD.dbname.tbspace.I0001.A001

The parameters are quite long and complex…

PAGESIZE(nnK) – Must be the page size of the object you are running against. Get it wrong and you “might produce unpredictable results”.

DSSIZE(nnnG) – Must be the DSSIZE of the object you are running against. Get it wrong and you “might produce unpredictable results”.

If you omit these two options, DSN1COMP will attempt to extract the data from the header page. Depending on how old the VSAM/Full image copy dataset is, this data might be found – or not!

NUMPARTS(nnnn) – The number of partitions of the object. If omitted, DSN1COMP assumes the object is not partitioned. Get it wrong and you “might produce unpredictable results”. For UTS data, this parameter is not used as DSSIZE takes care of the requirement.

FREEPAGE(nnn) – This must be set to the current FREEPAGE of the object. From 0 – 255. Default is 0.

PCTFREE(nn) – This must be set to the current PCTFREE of the object. From 0 to 99. Default is 5.

MAXROWS(nnn) – This must be set to the current MAXROWS of the object. From 1 to 255. Default is 255.

FULLCOPY – Informs DSN1COMP that the dataset is a full image copy dataset. If it is a partitioned dataset then you must also use the NUMPARTS parameter.

REORG – This switches DSN1COMP from “LOAD” mode to “REORG” mode and generally gives a more accurate compression report as DSN1COMP then simulates full record compression and not “after 1.2MB rows” or whatever internal threshold is met. Even using REORG, you might not get a perfect match with the real compressed data as DSN1COMP uses sampling. Not valid for LOB spaces!

LOB – Informs DSN1COMP that it is now working with a LOB space – If used, *no* other parameters are allowed! You must have the zEDC for this option!

COMPTYPE(x-x) – HUFFMAN, FIXED or ALL. If not specified, DSN1COMP will check for hardware support and output HUFFMAN and FIXED, otherwise just FIXED.

ROWLIMIT(n-n) – An absolute must! From 1 to 99,000,000. Forget this and DSN1COMP will trundle through the *entire* dataset! I would set this to 99999 to begin with.

EXTNDICT(xxxxxxxx) – An eight-byte name for a compression dictionary that is generated and externalized as an object deck to the DSN1DICT DD card. This is not normally used.

LEAFLIM(n-n) – Limits the number of index leaf pages that DSN1COMP reads to calculate index compression rates. From 1 to 99,000,000. Default is all index leaf pages. Remember that index compression only compresses leaf pages. Note that this is the only parameter allowed for index datasets.

How it looks

DSN1999I START OF DSN1COMP FOR JOB BOXWELL$ STEP1                         
DSN1998I INPUT DSNAME = xxxxxx.DSNDBD.dbname.tbspace.I0001.A001  , VSAM
DSN1944I DSN1COMP INPUT PARAMETERS                                        
         INPUT DATA SET CONTAINS NON-COMPRESSED DATA                      
           4,096  DICTIONARY SIZE USED                                    
               0  FREEPAGE VALUE USED                                     
               0  PCTFREE VALUE USED                                      
                  COMPTYPE(ALL) REQUESTED                                 
           9,999  ROWLIMIT REQUESTED                                      
                  ESTIMATE BASED ON DB2 REORG METHOD                       
             255  MAXROWS VALUE USED                                      
DSN1940I DSN1COMP COMPRESSION REPORT                                                            
  HARDWARE SUPPORT FOR HUFFMAN COMPRESSION IS AVAILABLE                                         
  +------------------------------+--------------+------------+------------------+
  !                              !              ! Estimated  ! Estimated state  !
  !                              ! UNCOMPRESSED ! Compressed ! Compressed       !
  !                              !              ! FIXED      ! HUFFMAN          !
  +------------------------------+--------------+------------+------------------+
  ! DATA (IN KB)                 !        2,269 !        582 !              506 !
  ! PERCENT SAVINGS              !              !         74%!               77%!
  !                              !              !            !                  !
  ! AVERAGE BYTES PER ROW        !          235 !         62 !               54 !
  ! PERCENT SAVINGS              !              !         73%!               77%!
  !                              !              !            !                  !
  ! DATA PAGES NEEDED            !          589 !        154 !              134 !
  ! PERCENT DATA PAGES SAVED     !              !         73%!               77%!
  !                              !              !            !                  !
  ! DICTIONARY PAGES REQUIRED    !            0 !         64 !               64 !
  ! ROWS ... TO BUILD DICTIONARY !              !      1,149 !            1,149 !
  ! ROWS ... TO PROVIDE ESTIMATE !              !      9,999 !            9,999 !
  ! DICTIONARY ENTRIES           !              !      4,096 !            4,080 !
  !                              !              !            !                  !
  ! TOT PAGES (DICTNARY + DATA)  !          589 !        218 !              198 !
  ! PERCENT SAVINGS              !              !         62%!               66%!
  +------------------------------+--------------+------------+------------------+
                                                                                                
DSN1994I DSN1COMP COMPLETED SUCCESSFULLY,            605  PAGES PROCESSED                        

I have edited the report to make it a bit thinner!

As you can easily see, the original dataset is 589 pages. After “normal” compression it is down to 218, which is a 62% reduction. However, with Huffman it squeezes down even further to 198 pages, or 66%. So, according to my golden rule, it fits well! Once done and actioned by a REORG, remember to performance-test, as compression can bite you. The SYSIBM.SYSTABLES column PCTROWCOMP is used by the Db2 Optimizer for access plan selection! In other words: Test, Test and Test!
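
After the REORG you can check what the catalog, and thus the optimizer, now sees. A minimal sketch, assuming the data lives in a hypothetical database ROY, tablespace BOY:

SELECT TB.CREATOR, TB.NAME, TB.PCTROWCOMP
     , TP.PARTITION, TP.COMPRESS, TP.PAGESAVE
FROM SYSIBM.SYSTABLES    TB
    ,SYSIBM.SYSTABLEPART TP
WHERE TB.DBNAME = TP.DBNAME
  AND TB.TSNAME = TP.TSNAME
  AND TB.TYPE   = 'T'
  AND TP.DBNAME = 'ROY'
  AND TP.TSNAME = 'BOY'
ORDER BY 1 , 2 , 4
FOR FETCH ONLY
WITH UR
;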

How do Indexes look?

JCL is similar, but different of course, due to the lack of parameters!

//STEP1  EXEC PGM=DSN1COMP,                                          
// PARM='LEAFLIM(9999)'                                              
//STEPLIB  DD DISP=SHR,DSN=xxxxxx.SDSNEXIT.xxxx                       
//         DD DISP=SHR,DSN=xxxxxx.SDSNLOAD                           
//SYSPRINT DD SYSOUT=*                                               
//SYSUT1   DD DISP=SHR,DSN=xxxxxx.DSNDBD.dbname.ixspace.I0001.A001
DSN1999I START OF DSN1COMP FOR JOB BOXWELL$ STEP1                         
DSN1998I INPUT DSNAME = xxxxxx.DSNDBD.dbname.ixspace.I0001.A001  , VSAM
                                                                          
DSN1944I DSN1COMP INPUT PARAMETERS                                        
                  PROCESSING PARMS FOR INDEX DATASET:                     
           9,999  LEAF LEAFLIM REQUESTED                                  
                                                                          
DSN1940I DSN1COMP COMPRESSION REPORT                                      
                                                                          
           9,999  REQUESTED LEAF LIMIT REACHED                            
           9,999  Index Leaf Pages Processed                              
         621,333  Keys Processed                                          
         621,333  Rids Processed                                          
          32,096  KB of Key Data Processed                                
          11,947  KB of Compressed Keys Produced       

    EVALUATION OF COMPRESSION WITH DIFFERENT INDEX PAGE SIZES
                                                             
    ----------------------------------------------           
 8  K Page Buffer Size yields a                              
51  % Reduction in Index Leaf Page Space                     
    The Resulting Index would have approximately             
49  % of the original index's Leaf Page Space                
    No Bufferpool Space would be unused                      
    ----------------------------------------------           
                                                             
    ----------------------------------------------           
16  K Page Buffer Size yields a                              
63  % Reduction in Index Leaf Page Space                     
    The Resulting Index would have approximately             
37  % of the original index's Leaf Page Space                
32  % of Bufferpool Space would be unused to                 
    ensure keys fit into compressed buffers                  
    ----------------------------------------------           
                                                             
    ----------------------------------------------           
32  K Page Buffer Size yields a                              
63  % Reduction in Index Leaf Page Space                     
    The Resulting Index would have approximately             
37  % of the original index's Leaf Page Space                
66  % of Bufferpool Space would be unused to                 
    ensure keys fit into compressed buffers                  
    ----------------------------------------------           
DSN1994I DSN1COMP COMPLETED SUCCESSFULLY,          9,999  PAGES PROCESSED                   

Here you see the huge difference in output!

DSN1COMP computes the three possible new page sizes for your index (8K, 16K and 32K), which enables you to decide which one is “best” for the index to live in. In this example, the 8K bufferpool is the clear winner, as no space is wasted and it provides over 50% savings.

Problems?

The Db2 optimizer does *not* “know” that the index is compressed, but it *does* know which bufferpool you have moved it to. Remember to correctly size all of these new bufferpools so that you do not get paging or excessive flushing!

All clear?

Now it is clear: You must still weigh up the pros and cons here! Do not blindly compress the world and expect to save TBs of disk space! The CPU required to decompress/compress is not free, and the Db2 I/O Engines must also work for Index Compression.

Rules Of Thumb

Ignore very small rows – typically, it makes no sense to compress rows under 16 bytes at all! Remember, you can still only get a maximum of 255 rows in a page (even if compressed!), so very small rows would hit that limit and make the exercise pointless.

Very large rows – take a row that is 4,000 bytes and let’s say it compresses by 45%, leaving you with 2,200 bytes – it still only gets one row in a page. (The previous 4,000 bytes fitted due to the page size limit of 4,054, and with 2,200 still only one row is allowed!) In this case, I would change the bufferpool to actually realize any benefits.

There are cases where compressing (or using Huffman to compress more) can negatively affect an application that is using page-level locking.

Obviously, do not compress tablespaces defined with MAXROWS=1.

If the space saving is less than 10% – 20%, it generally makes no sense to use compression.

Db2 only actually does the compression if the row will then be shorter – It could well be, with lots of timestamp fields, that it ends up being the same size and so Db2 saves you CPU by not compressing it.

Compression is *not* encryption and should not be used as such! Furthermore, compressing encrypted data is probably pointless.

LOAD inserts data until a sweet spot is reached, and then it starts to insert compressed data (the same is true for INSERT, MERGE and LOAD SHRLEVEL CHANGE of course).

REORG builds the dictionary in the UNLOAD phase so all rows, if eligible, will be compressed.

To share or not to share?

Sharing dictionaries can be a great idea or a terrible idea – your data decides! In the LOAD statement, if you are loading at PARTITION level, you may add “COPYDICTIONARY nnnn” to save the time of completely building a dictionary. This only makes sense if you *know* that the dictionary is good for your partition as well.

Keeping Dictionaries

You may decide to save all the time spent building dictionaries by adding KEEPDICTIONARY to the LOAD or REORG utility cards. The only problem here is that data does sometimes change, and then a new dictionary will be required.

And what about Indexes?

Well, I tend to like the larger bufferpool here just to help reduce index page splits. So going to 8K and above can have a very nice knock-on effect on general performance and logging.

Looking for Candidates?

I would start with all big table partitions (over 199MB) that are not compressed at all and also perhaps take another look at any table partitions that compressed badly. Perhaps Huffman or rebuilding the dictionary could help? For indexes I would just list out the uncompressed ones larger than 99MB. Feel free to change the values!

Here’s some simple SQL to list out the interesting cases. For table partitions and tablespaces:

SELECT RTS.DBNAME                        
     , RTS.NAME                          
     , RTS.PARTITION                     
     , RTS.SPACE / 1024 AS MB            
     , TS.INSTANCE                       
     , TP.IPREFIX                        
     , TP.COMPRESS                       
     , TP.PAGESAVE                       
FROM SYSIBM.SYSTABLESPACESTATS RTS       
    ,SYSIBM.SYSDATABASE        DB        
    ,SYSIBM.SYSTABLESPACE      TS        
    ,SYSIBM.SYSTABLEPART       TP        
WHERE     RTS.DBNAME       = TS.DBNAME   
  AND NOT RTS.DBNAME    LIKE 'DSNDB0_'   
  AND     RTS.NAME         = TS.NAME     
  AND     RTS.DBNAME       = DB.NAME     
  AND     RTS.INSTANCE     = TS.INSTANCE 
  AND NOT DB.TYPE          = 'W'         
  AND     RTS.DBNAME       = TP.DBNAME   
  AND     RTS.NAME         = TP.TSNAME   
  AND     RTS.PARTITION    = TP.PARTITION
  AND     RTS.SPACE / 1024 > 199         
  AND    (TP.COMPRESS      = ' '         
      OR (TP.COMPRESS     IN ('Y' , 'F') 
      AND TP.PAGESAVE      < 20))        
ORDER BY 4 DESC, 1 , 2 , 3               
FOR FETCH ONLY                           
WITH UR                                  
;                                        

For indexes:

SELECT RTS.DBNAME                        
     , RTS.INDEXSPACE                    
     , RTS.PARTITION                     
     , RTS.SPACE / 1024 AS MB            
     , RTS.NLEAF                         
     , RTS.INSTANCE                      
     , IP.IPREFIX                        
FROM SYSIBM.SYSINDEXSPACESTATS RTS       
    ,SYSIBM.SYSINDEXES         IX        
    ,SYSIBM.SYSINDEXPART       IP        
WHERE     RTS.NAME         = IX.NAME     
  AND NOT RTS.DBNAME    LIKE 'DSNDB0_'   
  AND     RTS.CREATOR      = IX.CREATOR  
  AND     RTS.PARTITION    = IP.PARTITION
  AND      IX.CREATOR      = IP.IXCREATOR
  AND      IX.NAME         = IP.IXNAME   
  AND     RTS.SPACE / 1024 > 99          
  AND      IX.COMPRESS     = 'N'         
ORDER BY 4 DESC, 1 , 2 , 3               
FOR FETCH ONLY                           
WITH UR                                  
;                                        

What are your thoughts or experiences with compression? I would love to hear from you!

TTFN,

Roy Boxwell

2025-03 You deprecated, are?

This month I wish to bring along a few things of future interest. It all started in Valencia at the IDUG EMEA in 2024, shortly before the catastrophic floods. Haakon Roberts held his “trends and directions” presentation at the very beginning and this text was in his slide deck on page 4:

Now, we all know that IBM *never* announces “Db2 V14 is coming soon!” But they do talk about Vnext, and it is pretty obvious that this list will stop you migrating from Db2 13 to Db2 14 if any of these items exist or, even worse, are in use at your shop.

What Can You Do?

Well, way back in 2020, I wrote a Migration HealthCheck for Db2 z/OS program as our yearly “give-away” and guess what? It does all the stuff in that list *apart* from VTAM. Not only that, but in the intervening years Audit has grown and grown in importance. Then, in January 2025, DORA came out, for which I wrote another yearly “give-away” called SecurityAudit HealthCheck for Db2 z/OS. This checks everything an auditor would like to have checked, and *also* checks for VTAM/SNA usage in the Communication Database (CDB).

VTAM/SNA??

Cast your mind back about 25 years or so, and you might well remember that TCP/IP was some weird new-fangled way of sending “packets of information” – Not something responsible mainframers did at all! All we had was VTAM/SNA and 3270 with green screens – It was (is!) enough for us!

The Cheque is in the Post …

These days, VTAM/SNA has long overstayed its welcome, as it was a “I trust you” style of communication. The rationale was, “If you have logged onto one Db2 system, I must trust you on any other Db2 system – Who would lie to me?” So, it is not recommended any more and, in fact, with DORA and other Audit requirements it is a *bad* idea to even allow it to be “on its perch” – it must be carried out and buried!

When the B was big!

Back in DB2 V9 (remember when the B was BIG?), IBM brought in the IPNAME “ZPARM” to enable the complete disabling of VTAM/SNA communication in the DDF, as it was known, even way back then, to be an inherent security risk.

Why “ZPARM”?

Well, here is another twist to the story: IBM introduced this like a ZPARM but it is actually a parameter of the DDF and is stored in the Bootstrap Dataset (BSDS). So, run a DSNJU004 and you might see something like this:

LOCATION=xxxxxxxx IPNAME=(NULL) PORT=nnnn SPORT=nnnnn RPORT=nnnn
ALIAS=(NULL)                                                   
IPV4=xxx.xxx.xxx.xxx IPV6=NULL                                    
GRPIPV4=xxx.xxx.xxx.xxx GRPIPV6=NULL                              
LUNAME=xxxxxxxx PASSWORD=(NULL) GENERICLU=(NULL)


Here you can plainly see the IPNAME=(NULL) telling you it is not set and thus allows VTAM/SNA.

When DDF starts, it reports all of its parameters, slightly differently of course just to be awkward, in the xxxxMSTR output:

12.15.25 STC04347  DSNL003I  -xxxx DDF IS STARTING       
12.15.26 STC04347  DSNL004I  -xxxx DDF START COMPLETE 
   605                        LOCATION  xxxxxxxx         
   605                        LU        xxxxxxxx.xxxxxxxx
   605                        GENERICLU -NONE            
   605                        DOMAIN    xxx.xxx.xxx.xxx  
   605                        TCPPORT   nnnn             
   605                        SECPORT   nnnnn            
   605                        RESPORT   nnnn             
   605                        IPNAME    -NONE            
   605                        OPTIONS:                   
   605                         PKGREL = COMMIT

Here VTAM/SNA usage is not disallowed as IPNAME is -NONE. You can also issue the -DIS DDF command on your stand-alone Db2 subsystem or on all members of your Db2 data-sharing group and verify that the output looks like:

DSNL080I  xxxxx DSNLTDDF DISPLAY DDF REPORT FOLLOWS:          
DSNL081I STATUS=STARTD                                        
DSNL082I LOCATION           LUNAME            GENERICLU       
DSNL083I xxxxxxxx           xxxxxxxx.xxxxxxxx -NONE           
DSNL084I TCPPORT=nnnn  SECPORT=nnnnn RESPORT=nnnn  IPNAME=-NONE
DSNL085I IPADDR=::xxx.xxx.xxx.xxx

In the DSNL084I message, the value IPNAME=-NONE is what you must look for and hopefully *not* find!

Now, in another subsystem, I have IPNAME set so the BSDS print looks like:

LOCATION=xxxxxxxx IPNAME=yyyyyyy PORT=nnnn SPORT=nnnnn RPORT=nnnn
ALIAS=(NULL)                                                    
IPV4=xxx.xxx.xxx.xxx IPV6=NULL                                     
GRPIPV4=xxx.xxx.xxx.xxx GRPIPV6=NULL                               
LUNAME=xxxxxxxx PASSWORD=(NULL) GENERICLU=(NULL)

and when DDF starts it reports:

12.15.36 STC04358  DSNL003I  -xxxx DDF IS STARTING  
12.15.37 STC04358  DSNL004I  -xxxx DDF START COMPLETE
   713                        LOCATION  xxxxxxxx    
   713                        LU        -NONE       
   713                        GENERICLU -NONE       
   713                        DOMAIN    -NONE       
   713                        TCPPORT   nnnn        
   713                        SECPORT   nnnnn       
   713                        RESPORT   nnnn        
   713                        IPNAME    yyyyyyy     
   713                        OPTIONS:              
   713                         PKGREL = COMMIT      
   713                         WLB = DFLT

The IPNAME is set to something and the LU is not set, even though it *is* set in the BSDS – much better!

Again, you can also issue the -DIS DDF command on your stand-alone Db2 subsystem or on all members of your Db2 data-sharing group and verify that the output looks like:

DSNL080I  xxxxx DSNLTDDF DISPLAY DDF REPORT FOLLOWS:               
DSNL081I STATUS=STARTD                                             
DSNL082I LOCATION           LUNAME            GENERICLU         WLB
DSNL083I xxxxxxxx           -NONE             -NONE             DFLT
DSNL084I TCPPORT=nnnn  SECPORT=nnnnn RESPORT=nnnn  IPNAME=yyyyyyy  
DSNL085I IPADDR=::xxx.xxx.xxx.xxx

Now in the DSNL084I the value IPNAME=yyyyyyy is what you must look for and hopefully find!

Delete Delete Delete

The last piece of the VTAM/SNA puzzle is to clean up your CDB afterwards, just to make sure no old and unused, and now completely unusable, definitions are lying around. They annoy auditors, so I would recommend deleting all of the VTAM/SNA definitions that you might still have. Our DORA give-away last year (SecurityAudit HealthCheck) listed all these out for you to do exactly that. It is well worth downloading and running as it is free! Putting it simply, just review, and then delete, all the rows in the tables SYSIBM.LULIST, SYSIBM.LUMODES, SYSIBM.LUNAMES and SYSIBM.MODESELECT.
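
As a minimal sketch of that review-then-delete process, using SYSIBM.LUNAMES as the example (the same pattern applies to LULIST, LUMODES and MODESELECT), and assuming the usual change-control paperwork has been done first:

-- Review what is actually in there (a one-off SPUFI look, so SELECT * is fine):
SELECT * FROM SYSIBM.LUNAMES WITH UR ;
-- Only after the review, and after confirming nothing still needs SNA:
DELETE FROM SYSIBM.LUNAMES ;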

Hooray!

With these two freebies you can easily check if your systems are:

  1. Db2 Vnext ready
  2. DORA and PCI DSS V4.0.1 Compliant!

Pretty cool for nothing huh?

TTFN

Roy Boxwell

2025-02 It SIEMs good to me!

Hi! Excuse the horrible pun, but the SIEM world has intruded into my quiet COBOL / SQL world over the last month or two quite badly!

SIEM?

You hear it a lot but what exactly is it? It is Security Information and Event Management, hence SIEM. It is a field within computer security that combines Security Information Management (SIM) and Security Event Management (SEM) which enables real-time analysis of security alerts generated by applications and network hardware.

DORA

DORA kicked it all off for me with the EU bringing in brand new laws to make FinTech more resilient against cyber threats which in turn means doing more auditing and testing of the complete infrastructure.

WorkLoadExpert

Now SQL WorkloadExpert for Db2 z/OS (WLX) was designed many many moons ago for performance monitoring and tuning SQLs but over the last five to six years has slowly turned into an auditing tool. Naturally, we collect all the SQL on your machine to do the performance monitoring so this is a very good start for doing audit and test. Then we added more and more IFCID data to finally get the complete audit picture with our WLX Audit for Db2 z/OS.

A bit LEEFy on the Street…

This was all well and good but then of course came the time when the mainframe did not stand alone (At least from the auditor’s point of view!) which meant we had to create a method to transfer the data required for audit from the mainframe down to “the boxes you can carry”. The first way we developed was using the LEEF record format.

LEEF?

The Log Event Extended Format (LEEF) is a customized event format for IBM® Security QRadar®. QRadar can integrate, identify, and process LEEF events. LEEF events must use UTF-8 character encoding. Looking at the EBCDIC native data:

This is OPUT down to USS and then looks like:

Which is then iconv’d into UTF-8 which (after a DISPLAY UTF in browse) looks nearly the same:

The final step, for LEEF processing, was then a GZIP call which dramatically shrinks the size of the file but, of course, is not human readable any more:

CEF?

SPLUNK then came along…I love these names! So, we added the Common Event Format (CEF) to make it work deep down in the caves… That is mentioned in the company profile by the way: From its founding in 2003, Splunk has helped organizations explore the vast depths of their data like spelunkers in a cave (hence, “Splunk”).

LOGger heads?

Then we had some customers who did not want LEEF/CEF format but wanted a “direct to analyze tool” solution using SYSLOGGER and JSON. We then created the SYSLOGGER API, which connects directly to your system logger via TCP/IP and sends the data straight across. Which, in trace mode, looks like this:

A Bit of a Stretch?

What we now have is even more customers asking for a mix of both of these systems. So, a JSON payload but in a flat file that can be directly ingested by Elastic… The things I do!!!

Here’s how the Elastic Common Schema (ECS) data looks when looking at the EBCDIC native data:

Just like LEEF it is then OPUT down to USS and iconv’d to UNICODE so it eventually looks like:

Here with Browse you can see the data again:

JSON Format?

Naturally, you can then ingest it as a JSON file which looks a little like this:

All done?

Probably never! But what SIEM systems are you all using out there? Have you all been “hit by the Audit” bug? I would love to hear your war stories!

TTFN,

Roy Boxwell

2025-01 Compress and Replicate This!

This month, I want to delve into one of those newish, but well-hidden, features of Db2 that arrived some years ago but did not get that much publicity!

Compress Yes!

We all love compression! It saves tons of disk space and, over the years, we even got Index Compression. Yes, I know it is *not* really compression at all – just the leading bytes – but it is better than nothing! We also got better, newer, Table compression algorithms (looking at you, Mr. Huffman!) and we always had the problem of un-compressing data if “we” needed to look at it off-line.

Dictionaries Anymore?

Db2, at least for Tables, uses dictionaries of varying sizes to save “common strings” mapped to small numbers. These dictionaries are pretty obviously only valid for one set of data from one table and/or partition. If you do a REORG after inserting a ton of new data then the new dictionary can be very different from the prior one. Db2 writes the *previous* dictionary to the log and we carry on fine …

Or Do We?

Well, what happens when you wish to read data from the log that was inserted *before* your current dictionary was created? This is, and will always be, a small technical problem! The programs out there now simply crawl back through the Db2 log looking for the compression dictionary that matches to your timescales. This obviously takes time, I/O and CPU!

Online?

Even if the dictionary you are looking for is the current one, accessing it might well cause DBD Locks to happen when it has to be opened, and it can even cause Db2 GBP dependency problems. The fun continues if the logs you need have been archived to tape, of course.

DCC

Data Capture Changes are the keywords here – DCC for the standard TLA. As you are probably aware, setting this at the table level enables Db2 to support data replication using the log. IBM calls its version DataPropagator. The DDL syntax in CREATE/ALTER TABLE is DATA CAPTURE NONE/CHANGES:

DATA CAPTURE Specifies whether the logging of the following actions on the table includes additional information to support data replication processing:

• SQL data change operations

• Adding columns (using the ADD COLUMN clause)

• Changing columns (using the ALTER COLUMN clause)

NONE Do not record additional information to the log. This is the default.

CHANGES Write additional data about SQL updates to the log.
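
As a minimal sketch, switching replication logging on for a hypothetical table ROY.BOY_TABLE and then checking in the catalog which tables already have it:

ALTER TABLE ROY.BOY_TABLE DATA CAPTURE CHANGES ;

SELECT CREATOR, NAME
FROM SYSIBM.SYSTABLES
WHERE DATACAPTURE = 'Y'
FOR FETCH ONLY
WITH UR
;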

This Changes Everything!

Indeed, it does! DATA CAPTURE CHANGES (DCC and also CDC sometimes!) causes a full row to be logged for every update. If there was an update of just the very last byte of data in a row then it was a tiny log record of basically just that last byte. With DCC you always get the full row. For INSERT and DELETE you always got the full row anyway. Why the change? So that the replication software has it easy! Simple as that!

Side Effects?

There are a few side effects, of course. Mainly, the log size grows as you are logging more data, and any use of the SQL TRUNCATE will behave like a MASS DELETE (so no IGNORE DELETE TRIGGERS or IMMEDIATE options, for example). There are also a few ZPARMs that must all be reviewed and/or changed. Firstly, the ZPARM UTILS_BLOCK_FOR_CDC with options YES/NO and default NO. If set to NO then no utilities are barred from working on these DCC tables. If set to YES then:

  • CHECK DATA with DELETE YES LOG NO
  • LOAD with SHRLEVEL NONE or REFERENCE
  • RECOVER with TOLOGPOINT, TORBA, TOCOPY, TOLASTCOPY, or TOLASTFULLCOPY
  • REORG TABLESPACE with DISCARD
  • REPAIR with LOCATE DELETE

Terminated with Extreme Prejudice!

Will all be terminated. Why, you may wonder? Well, all of these can obviously *flood* the log with rows, millions of rows of data. If using data replication then their usage should be “evaluated” before attempting them – hence the ZPARM. Too many customers shot themselves in the foot with a badly timed LOAD, for example, so we got this “protection” ZPARM. Naturally, it is changeable online, so once you have reviewed the requirement for the utility you can still fire it off, if desired, and then, very quickly, switch back!

More?

Yes, there is always more! ZPARM REORG_DROP_PBG_PARTS with options DISABLE/ENABLE and default DISABLE. If it is set to ENABLE, allowing REORG to drop empty PBG partitions after a successful REORG, but the table within is DCC, then the drop of empty partitions will be ignored! Finally, we have the ZPARM RESTRICT_ALT_COL_FOR_DCC with options YES/NO and default NO. It specifies whether to allow ALTER TABLE ALTER COLUMN for DCC tables. If it is set to YES you will get an SQLCODE -148 when ALTERing a DCC object using any of the options SET DATA TYPE, SET DEFAULT or DROP DEFAULT. Again, this is to stop accidental log and data replication flooding!
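
For illustration, a minimal sketch of the kind of ALTER that would be rejected with SQLCODE -148 when RESTRICT_ALT_COL_FOR_DCC is set to YES (ROY.BOY_TABLE and COL1 are hypothetical names and the table is assumed to use DATA CAPTURE CHANGES):

ALTER TABLE ROY.BOY_TABLE
  ALTER COLUMN COL1 SET DATA TYPE VARCHAR(200)
;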

Compression Woes

So now you can fully see the quandary! If the rows are all compressed on the log you must access the dictionary to un-compress them on the “other side”, where you do the replication, or just locally if you want to see the data. As mentioned, getting our grubby little hands on the dictionary can cause performance issues so what can we do?

CDDS!

This arrived in Db2 11 for data-sharing people out there with a, and I quote, “GDPS® Continuous Availability with zero data loss environment.” It actually stands for Compression Dictionary Data Set and it brought in a couple of new ZPARMs:

  • CDDS_MODE with options NONE, SOURCE and PROXY and default NONE.
  • CDDS_PREFIX a 1 – 39 Byte z/OS dataset prefix for the VSAM Dataset where the “magic” happens.

VSAM???

Yep, you read that right! The CDDS VSAM dataset (CDDS_PREFIX.CDDS) stores up to three versions of the dictionaries that your tables/table parts are using, thus enabling fast, problem-free access to the dictionaries. Naturally, the CDDS must be available to *both* systems – and this is why VSAM was chosen, of course!

Start Me Up!

Some new commands also came in here. -START CDDS instructs all members of a data-sharing group to allocate the CDDS and start using it. -STOP CDDS tells all members to stop using the CDDS and to close and deallocate it. This must be done before recovering the CDDS, for example.

So???

So now you have a possible way of getting great performance in data replication, but how do you get it all running and how do you review what’s going on? This is where REORG and a new standalone utility DSNJU008 come in.

Reorg Yes But No…

The way to initialize your CDDS is simply to REORG all of your tables/table partitions  with the INITCDDS option. Do not panic! If this option is set to YES it will *not* do a REORG! It will purely externalize the current dictionary and carry on. This means you can use a “generator” SQL statement like this:

SELECT DISTINCT 'REORG TABLESPACE ' CONCAT STRIP(A.DBNAME)
                        CONCAT '.' CONCAT STRIP(A.TSNAME)
                        CONCAT ' INITCDDS YES'
FROM SYSIBM.SYSTABLES A
WHERE A.DATACAPTURE = 'Y'
;

This will generate all your REORG cards. Then, one mass pseudo-REORG later, your CDDS is up and running!

Standalone

The DSNJU008 utility can be executed using JCL like this:

//CDDSPRIN EXEC PGM=DSNJU008
//STEPLIB  DD DSN=<Your Db2 exit lib>,DISP=SHR
//         DD DSN=<Your Db2 load lib>,DISP=SHR
//SYSUT1   DD DSN=<Your CDDS prefix>.CDDS,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
LIMIT(9999)
/*

Please refer to the Db2 for z/OS Utilities Guide and Reference for all the details about the other keywords, but the interesting factoid is that versions are stored as 1, 2, 3, with 1 being the most recent and the rest going backwards in time.

What Time Was It?

The program outputs everything you could ever wish for about your data-sharing systems’ use of compression dictionaries, including the dictionary timestamp, which I find very nice, as every now and again it really can pay to analyze the data with a new compression test, just to see if you can squeeze any more onto your SSDs!

Why All This Trouble, Again?

The idea behind all of this work is to make using IFCID 306 better, faster and cheaper. It also has a knock-on effect on the “vendor scape”, as lots of vendors offer tooling for data replication or log reading/analysis, and they should all now work with this feature, enabling you, the DBA, to be more productive at less cost. If their version of REORG/LOAD does *not* handle the CDDS, then you must always schedule a new pseudo-REORG with INITCDDS afterwards to handle the VSAM updates correctly.

Using It?

My question for this month, dear readers, is: Are any of you using this feature or are planning to use it soon?

TTFN

Roy Boxwell

2024-12 Security & Audit Check

Hi all! Welcome to the end-of-year goody that we traditionally hand out to guarantee you have something to celebrate at the end-of-year party! This time, I wish to introduce to you a full vulnerability check of your Db2 for z/OS systems.

DORA!

You should all be aware, and scared, of DORA by now. If not, read my prior newsletter 2024-11 DORA or check out my IDUG presentation or my recorded webinar. Whatever you do, you must get up to speed with DORA as it comes into force on the 17th Jan 2025 which is only one month away from the publishing date of this newsletter!

Not just Born in the USA!

Remember, DORA is valid for the whole wide world, not just businesses within the EU. If you do *any* sort of financial trading within the EU you are under the remit of DORA, just like you are with GDPR! Even if you are not trading within the EU bloc, doing a full vulnerability check is still a pretty good idea!

PCI DSS V4.0.1

We also now have the Payment Card Industry Data Security Standard (PCI DSS) V4.0.1 going live at the end of March 2025… Coincidence? I don’t think so. Mind you, at least the Americans do not fine anyone who fails!

What are We Offering?

This year, the product is called the SecurityAudit HealthCheck for Db2 z/OS or SAC2 for short. It is a very lightweight and robust tool which basically does all of the CIS Vulnerability checks as published by the Center for Internet Security (CIS) in a document for Db2 13 on z/OS:

CIS IBM Z System Benchmarks (cisecurity.org)

https://www.cisecurity.org/benchmark/ibm_z

This contains everything you should do for audit and vulnerability checking and is well worth a read!

Step-By-Step

First Things First!

The first thing SAC2 does is evaluate *all* security-relevant ZPARMs and report which ones are not set to a “good” value. It then goes on to check that values which should never be left at their defaults have actually been changed – the TCP/IP port number, for example. Then it finishes off by validating that SSL has been switched on for TCP/IP communications and that any and all TCP/IP ALIAS definitions also have the correct settings.

Communication is Important!

Next up is a full evaluation of your Communication Database (CDB). This data has been around for decades and started life for SNA and VTAM connections between host Db2s. These days, SNA is dead and most of the connections are coming from PCs or servers. That means that there *could* be a lot of dead data in the CDB and, even worse, ways of connecting to your mainframe that you did not even know, or forgot, existed! Think plain-text passwords with SQLID translation, for example!
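
As a minimal sketch of the sort of review SAC2 automates, assuming you just want to eyeball the inbound/outbound security settings in the two most interesting CDB tables:

SELECT LINKNAME, SECURITY_OUT, USERNAMES
FROM SYSIBM.IPNAMES
FOR FETCH ONLY
WITH UR
;
SELECT LUNAME, SECURITY_IN, SECURITY_OUT, ENCRYPTPSWDS, USERNAMES
FROM SYSIBM.LUNAMES
FOR FETCH ONLY
WITH UR
;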

Danger in the Details

Naturally, blindly changing CDB rows is asking for trouble, and if SAC2 finds anything odd/old/suspicious here, you must create a project to start removal. There is a strong correlation between “Oh, I can delete that row!” and “Why can’t any of my servers talk to the mainframe anymore?”. The tool points out certain restrictions and pre-reqs that have to be handled *before* you hit the big button on the wall – the JDBC version, for example.

Taking it All for GRANTed?

GRANTs can be the root of all evil! GRANTs TO PUBLIC just make auditors cry, and use of WITH GRANT OPTION makes them jump up and down. Even IBM is now aware that blanket GRANTing can be dangerous for your health! SAC2 analyzes *all* GRANTs: first, to discover PUBLIC grants on the Catalog and Directory, as these should NEVER exist (with the one tiny exception of, perhaps, on a good day when the sun is shining, SYSIBM.SYSDUMMY1); then it checks all user data as well, since GRANT TO PUBLIC there is just lazy. Checking everything for WITH GRANT OPTION simply makes sure you are working to modern security standards!
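
A minimal sketch of the simplest such check – table-level grants to PUBLIC – is shown below; similar queries apply to the other *AUTH catalog tables (a value of 'G' in one of the privilege columns means it was granted WITH GRANT OPTION):

SELECT GRANTOR, TCREATOR, TTNAME
     , SELECTAUTH, INSERTAUTH, UPDATEAUTH, DELETEAUTH
FROM SYSIBM.SYSTABAUTH
WHERE GRANTEE = 'PUBLIC'
FOR FETCH ONLY
WITH UR
;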

Fun Stuff!

These days you should be using Trusted Contexts to access from outside the Host. This then requires Roles and all of this needs tamper-proof Audit Policies. On top of all this are the extra bits and pieces of Row Permissions and Column Masks. All of these must be validated for the auditors!

Elevated Users?

Then it lists out the group of privileged user IDs. These all have elevated rights and must all be checked in detail: who has what, and why?
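
A minimal sketch of such a check, assuming the classic GRANT-based case of system authorities held in SYSUSERAUTH (authorities set purely via ZPARMs, such as install SYSADM, will naturally not show up here):

SELECT GRANTEE, SYSADMAUTH, SYSCTRLAUTH, SYSOPRAUTH
FROM SYSIBM.SYSUSERAUTH
WHERE SYSADMAUTH  <> ' '
   OR SYSCTRLAUTH <> ' '
   OR SYSOPRAUTH  <> ' '
FOR FETCH ONLY
WITH UR
;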

Recovery Status Report

Finally, it lists out a full Recovery Status Report so that you can be sure that, at the time of execution, all of your data was Recoverable.

It is a Lot to Process!

It is indeed. The first time you run it, you might well get tens of thousands of lines of output but the important thing is to run it and break it down into little manageable sections that different groups can then work on. This is called “Due Diligence” and can save your firm millions of euros in fines.

Lead Overseer

Great job title, but if this person requests data then you have 30 days to supply everything they request. Not long at all! SAC2 does the lion’s share of the work for you.

Again and Again and Again

Remember, you must re-run this vulnerability check on a regular basis for two major reasons:

  1. Things change – Software, Malware, Attackers, Defenders, Networks, Db2 Releases etc.
  2. Checks get updated – The auditors are always looking for more!

Stay Out of Trouble!

Register, download, install and run today!

I hope it helps you!

TTFN

Roy Boxwell

Future Updates:

The SAC2 licensed version will be getting upgraded in the first quarter of 2025 to output the results of the run into a Comma Separated Values (CSV) file, to make management reporting, and the delegation of projects to fix any found problems, easier. It will also get System Level Backup (SLB) support added. SLB is good but you *still* need Full Image Copies! Further, it will be enhanced to directly interface with our WLX Audit product.