2022-07 IBM problem data requests…

Most of us have been there … something somewhere goes wrong … things are checked, changes are undone, tests are re-run and in the end you have no idea why a failure happens.

Who You Gonna Call?

Yep, it is time to open a Case at IBM technical support … So you open a Case and you type in as much detail as possible about when and what happened but it is *never* enough! In the world of Db2, the first question that *always* comes back is “Please supply us with further information”, like:

  • SYSLOG
  • Master Log
  • MEPL
  • Detailed EREP
  • Complete SVC dump

WTF? (“What’s That For” before anyone complains)

SYSLOG

The syslog is the console of a z/OS system and any and all interesting, and sometimes not so interesting, messages from *all* running “things” are in here – it is normally enormous! The problem begins when IBM Technical Support asks “please provide us with the SYSLOG from 06:00 to 06:30 on the day of the event”.

SDSF

SDSF is your friend here and I really mean it! All you do is go to SDSF and then enter primary command LOG. From this panel you enter three primary commands, one after another, and you are done!

  • PT ODSN ‘your.dataset.name’ * NEW
  • PT 06.00.00 22/06/2022 06.30.00 22/06/2022
  • PT CLOSE

That is it! Your dataset will then just have the data from between those times. This is *extremely* handy! Note that the date format is locale-dependent and, as I am in Europe, we have DD/MM/YYYY. I am sure you know your own date format!

Master LOG

This is the first SDSF dataset in your ssidMSTR STC. So, once more in SDSF, using *MSTR as a prefix and then putting line command ? next to the sub-system in question shows you three DDNAMEs. The first one, JESMSGLG, is the one they normally need. Here you use line command XDC to get an SDSF Open Print Data Set window:

xxxxMSTR STC09394           SDSF Open Print Data Set                         
COMMAND INPUT ===>                                         SCROLL ===> CSR
                                                                             
                                                                             
Data set name  ===> 'xxxxxxx.SYSLOG.PRINT'                                   
Member to use  ===>                                                          
Disposition    ===> NEW        (OLD, NEW, SHR, MOD)                          
                                                                             
Management class     ===>           (Blank for default management class)     
Storage class        ===>           (Blank for default storage class)        
  Volume serial      ===>           (Blank for authorized default volume)    
  Device type        ===>           (Generic unit or device address)         
Data class           ===>           (Blank for default data class)           
  Space units        ===> CYLS      (BLKS, TRKS, CYLS, BY, KB, or MB)        
  Primary quantity   ===> 19        (In above units)                         
  Secondary quantity ===> 19        (In above units)                         
  Directory blocks   ===>           (Zero for sequential data set)           
  Record format      ===> FBA                                                
  Record length      ===> 121                                                
  Block size         ===>                                                    
Data set name type   ===>           (LIBRARY, blank, ... See Help for more)  
Extended attributes  ===>           (NO, OPT, or blank)                      

Here you can see I choose type FBA, LRECL 121 and a disposition of NEW for a new dataset. Hit ENTER and SDSF tells you how many lines it just wrote to that file:

PRINT CLOSED  23025 LINE

View the file and max down to the bottom:

023019 0------ JES2 JOB STATISTICS ------        
023020 -  17 MAY 2022 JOB EXECUTION DATE         
023021 -            2 CARDS READ                 
023022 -       28,616 SYSOUT PRINT RECORDS       
023023 -            0 SYSOUT PUNCH RECORDS       
023024 -        3,099 SYSOUT SPOOL KBYTES        
023025 -    50,488.70 MINUTES EXECUTION TIME     

So we know we are in the correct file! Here you can do some updating of “sensitive” data like IP addresses, user names, etc. Remember to just change the data, not blindly delete it! Naturally, you can delete stuff *after* the event of interest and probably a ton of stuff from *before*, but be careful what you delete!

MEPL

Say what? MEPL is the Module Entry Point List and IBM need it to see which PTFs and APARs have been applied in the application address space and the Db2 system. To get a MEPL I use normal utility JCL with a DIAGNOSE DISPLAY MEPL like this:

//MEPL     EXEC PGM=DSNUTILB,REGION=32M,       
//         PARM=(ssss,'DIAGNOSEMEPL')          
//STEPLIB  DD DISP=SHR,DSN=DSNsss.SDSNEXIT.ssss
//         DD DISP=SHR,DSN=DSNsss.SDSNLOAD     
//CEEDUMP  DD SYSOUT=*                         
//SYSUDUMP DD SYSOUT=*                         
//SYSPRINT DD SYSOUT=*                         
//SYSIN    DD *                                
 DIAGNOSE                                      
    DISPLAY MEPL                               
 DIAGNOSE END                                  
/*                                             

This will output the MEPL to SYSPRINT which starts like this:

DSNU000I    173 12:57:08.82 DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = DIAGNOSEMEPL                               
DSNU1044I   173 12:57:08.83 DSNUGTIS - PROCESSING SYSIN AS EBCDIC                                                      
DSNU050I    173 12:57:08.83 DSNUGUTC -  DIAGNOSE DISPLAY MEPL                                                          
DSNU861I    173 12:57:08.84 DSNUDIAG - DISPLAY MEPL FOR SUBSYSTEM xxxx                                                 
    0000 20B92820 C2C5D7D3 0140D4C5 D7D360D3  C9D2C540 C6D6D940 C4E2D5E4 E3C9D3C2    *....BEPL. MEPL-LIKE FOR DSNUTILB*
    0020 28100000 C4E2D5C1 C1404040 F0F761F1  F461F1F6 E4C9F3F9 F3F9F340 00000000    *....DSNAA   07/14/16UI39393 ....*
    0040 28100100 C4E2D5C1 D7D9C840 F1F261F2  F361F1F5 F1F34BF4 F6404040 00000000    *....DSNAPRH 12/23/1513.46   ....*
    0060 28100200 C4E2D5C6 D4D5C6D4 F1F061F1  F761F1F8 E4C9F5F8 F8F4F040 00000000    *....DSNFMNFM10/17/18UI58840 ....*
    0080 28100240 C4E2D5C6 D7D4E2C7 F1F061F1  F761F1F8 E4C9F5F8 F8F4F040 00000000    *... DSNFPMSG10/17/18UI58840 ....*
.
.
.

It is quite long! Here in my test system nearly 5000 lines are written to SYSPRINT. Then, like with the ssidMSTR, I use ? against the job and then XDC against the SYSPRINT DD card this time to create another file with type FBA and LRECL 133 to get your.mepl.list.

Detailed EREP

Now it gets interesting… The EREP (Environmental Record Editing and Printing Program) is the API to the system LOGREC dataset where all “events of interest” on a z/OS LPAR are recorded. It contains far less than the console log but is a treasure trove of data for the IBM Technical Support.

Here’s my job to simply do a Detailed EREP as per IBM standards:

//*------------------------------------------------------------------*/
//*  EREP: DETAILED EREP PRINT                                       */
//*------------------------------------------------------------------*/
//EREP     EXEC PGM=IFCEREP1,PARM='CARD'                               
//SERLOG   DD DISP=SHR,DSN=xxxxxxxx.LOGREC                            
//DIRECTWK DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(50,50))         
//EREPPT   DD SYSOUT=*,DCB=BLKSIZE=133                                 
//TOURIST  DD SYSOUT=*,DCB=BLKSIZE=133                                 
//SYSIN    DD *                                                        
ACC=N                                                                  
HIST=N                                                                 
ZERO=N                                                                 
PRINT=PS                                                               
TYPE=S                                                                 
/*                                                                     
//* IF REQUIRED YOU CAN ADD DATE, TIME RANGES TO FILTER DOWN           
//* WITHIN THE SYSIN LIKE:                                             
//* DATE=(YYDDD,YYDDD)                                                 
//* TIME=(HHMM-HHMM)                                                   

Do not forget to give your LOGREC DSN for the SERLOG DD. Most of the time I get just a few rows of output and then create another file using XDC from the EREPPT DD name, but this time with type FB and LRECL 133, to get the.erep.list.

Complete SVC Dump

If your Db2 system receives a dump, for whatever reason, it normally writes out an SVC dump to a special dataset that can be used to analyze what went wrong. It is very important that the SVC dump is complete and *not* partial …

Default Size

The default size is only 500MB which is way too small for a halfway decent production Db2 sub-system these days. It must normally be increased to at least 16000MB. To change this you issue a console command like:

CHNGDUMP SET,SDUMP,MAXSPACE=16000M

But make sure you have enough local page dataset space to handle your normal load PLUS the size of this dump dataset… auxiliary swapping (paging) while dumping is a painfully slow experience you do not want to suffer!

If successful, the SVC dump will be COMPLETE and then you are nearly done …

File Transfer

Most of the files I have described so far are quite small but the SVC dump is a monster. You must TERSE it using JCL like:

//AMATERSE  EXEC PGM=AMATERSE,PARM='SPACK'
//SYSPRINT  DD SYSOUT=*                   
//SYSUT1    DD DISP=SHR,                  
//             DSN=xxxxxxxx.xxxxxxxx       
//SYSUT2    DD DISP=(,CATLG),UNIT=SYSDA,             
//             DSN=xxxxxxxx.xxxxxxxx.TRS,  
//             SPACE=(CYL,(99,99),RLSE)   

I use the SPACK parameter which is, according to the documentation, much better at compression than the PACK parameter. Fun factoid of the day: SPACK is the “complex” format whereas PACK is the “simple” format – Gotta love IBM for that! IBM do prefer the TERSE style of compression, and please do *not* change the file ending! Adding a ZIP on top brings no real bonus and just confuses the automatic systems at IBM. Leave “.TRS” at the end and they know it has been TERSED.

Then download the xxxxx.xxxxx.TRS file as BINARY to the PC and all the other files as TEXT to the PC. Then simply upload by drag-and-drop to your IBM Case and you are ready for the next question!

Have you Switched it Off and On again?

I hope I never hear this about a mainframe Db2 problem!

I hope this was of some interest, and if you have any other Tips & Tricks about getting “standard” data to IBM, I would love to hear from you!

TTFN

Roy Boxwell

2022-04 A brief history of the Universal Tablespace (UTS) Part Two

This month I wish to finish off what I started last month: my musings about UTS over the releases and years.

Db2 12 Changes to the UTS Picture

For Partitioned By Range (PBR), a brand-new space was created called the UTS PBR Relative Page Number (RPN) which was, in my opinion, the best thing in Db2 12! Quite simply, it allows dynamic ALTERing of the DSSIZE of a partition, and of all the related partitioned indexes, on the fly, even while a LOAD is running! This was great! Any users out there who have had a nightmare LOAD? Yep, now, as long as you are actively monitoring your data and index partition sizes, you can issue these ALTERs automatically and never hit the buffers like you do today.

DSSIZE gets propagated through

To enable this feature DSSIZE was improved to be settable at the data partition level and also extended to partitioned indexes. The available values were changed to allow any integer from 1 GB to 1024 GB. This allows extreme flexibility for all sizing requirements. Note, however, that NPSIs are still stuck in the middle of nowhere with just PIECESIZE as a helping hand…
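
As a quick sketch of what that looks like in practice (the object names here are pure invention), growing a single data partition and its partitioned index on a PBR RPN space is just a pair of immediate ALTERs:

-- Grow data partition 7 of a PBR RPN tablespace on the fly
ALTER TABLESPACE MYDB.MYTS
  ALTER PARTITION 7 DSSIZE 256 G ;
-- And the matching partition of the partitioned index
ALTER INDEX MYSCHEMA.MYTAB_PIX
  ALTER PARTITION 7 DSSIZE 256 G ;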

Everything groovy???

So, what was wrong with this picture? Well, the change from PBR to PBR RPN was, shall we say, a little bit painful. The RID size got extended and as the RID is stored in *every* header page it required a tablespace reorg with partition-based inline image copies. Now, as you can well imagine, most people’s PBRs are pretty big, and allocating 4096 Virtual Tapes in parallel was just not going to happen! After a while IBM enhanced REORG so that you could put multiple copies on one tape, sort of like STACK, but much better – and not just for Tape but also for DASD. This has really accelerated the acceptance and usage of PBR RPN.

The future is bright!

Check this Blog entry:

https://www.idug.org/blogs/emil-kotrc1/2021/03/12/why-universal-table-spaces-uts

It is revealed that in Apollo (Db2 for z/OS vNext – Which has now been released as Db2 13 for z/OS), the ability to migrate from a PBG to a PBR will become available instead of the UNLOAD, DROP, CREATE, LOAD method which is the only way up until Db2 12. This will be very handy as the PBR RPN is the best way to go forward with large tablespaces (>60 GB) as long as you have *some* sort of available partitioning scheme, of course!
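
For the curious, the Db2 13 syntax should look roughly like the sketch below – the table, column and limit-key values are all invented, and it is a pending change that is materialized by a subsequent REORG, so please check the current documentation before relying on it:

-- Convert an existing PBG table to PBR in one ALTER (pending change)
ALTER TABLE MYSCHEMA.MYTAB
  ALTER PARTITIONING TO PARTITION BY RANGE (ORDER_DATE)
   (PARTITION 1 ENDING AT ('2010-12-31')
   ,PARTITION 2 ENDING AT ('2020-12-31')
   ,PARTITION 3 ENDING AT (MAXVALUE)) ;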

No more worries??

What do you need to worry about now? Well, you remember that huge LOAD I mentioned earlier that comes at the partition level? You must simply monitor the sizes of your partitioned objects and add a few GBs of space when required, on the fly, with no outage.
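
If you have no tool doing this for you, a rough-and-ready starting point is the Real-time Statistics. A minimal sketch (remember the values are only as fresh as the last RTS externalization) that lists your biggest table partitions by allocated space:

-- Biggest table partitions by currently allocated space (KB)
SELECT DBNAME, NAME AS TSNAME, PARTITION
      ,SPACE    AS ALLOC_KB
      ,DATASIZE AS DATA_BYTES
      ,EXTENTS
FROM SYSIBM.SYSTABLESPACESTATS
WHERE PARTITION > 0
ORDER BY SPACE DESC
FETCH FIRST 20 ROWS ONLY ;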

How?

Well, we have a little product called SAX+ which does exactly that! It starts the required OPx IFCID traces and computes the values against given thresholds, then automatically issues the ALTERs giving you a seamless experience with PBR RPNs. The only “problem” left now is when you are approaching the 1024 GB absolute PBR RPN physical limit. SAX+ warns you about this as well. Then you will have to either schedule a REBALANCE or a new LIMITKEY definition and REORG to spread the load again. However, when you know this well in advance it is no longer a serious problem!

Not yet at PBR RPN? – No problem!

PBRs which are not yet RPNs are monitored to warn when a threshold of usage in a data or index partition is exceeded. This also gives you more than enough lead time to get the REORG ready to switch it to RPN or just resize/rebalance your partitions.

What about PBGs?

PBGs are also fully supported within SAX+ as it can automatically add partitions, if desired, which additionally avoids SQLCODE -904’s. Plus, SAX+ adjusts the way it works depending on the MAXPARTITIONS and the actual number of allocated partitions. For example, MAXPARTITIONS 10 with a 90% warning would alert you when the ninth partition is allocated and in use. When the tenth partition gets allocated the warning switches from an LDS warning (Running out of available partitions) to a “last partition” filling up warning similar to PBRs which are not yet RPNs. This obviously helps a lot when you have MAXPARTITIONS 1 which is the IBM recommendation these days.

Anything else?

Naturally, SAX+ also takes care of the other problems that can catch you unawares:

  • Running out of Linear Datasets (LOB, PBG, XML, non-UTS space and NPSI)
  • Running out of space
  • Running out of extents and also badly fragmented extents
  • Running out of volumes
  • Running out of physical range for SEQUENCES or IDENTITY columns
  • Running out of physical range for Numeric Primary key columns
  • Running out of DBATs
  • Running out of space in SMS Storage Groups

All of the entries in the above list are very annoying when they hit you with no warning, especially running out of Linear Datasets. Think NPSIs here! Every PIECESIZE is an LDS and so you can run out of these quicker than you think.

LOG problems?

One last thing that SAX+ can help you with is detecting Db2 Log problems together with stalled logs… Not something you normally think of with a space management tool! When Db2 starts running out of space in the active logs it issues the DSNJ110E message, and I know shops that have “missed” this alert. The problem was that the Db2 Log SMS Storage Group was getting full *and* the tape offload had stalled… As you can imagine, this scenario is not pretty, so checking your Log storage groups is probably a good idea to guarantee you do not hit this problem!

That’s enough about UTS for this month. If IBM bring out another TLA for a TLA I will scream!

TTFN,

Roy Boxwell

2022-02 ZPARMs never stop changing!

This month I want to go through some of the absolutely most important ZPARMs that control how your Db2 systems behave in a very significant manner. All of the following ZPARMs have a performance impact of some sort and we are always trying to squeeze the last drop of performance out of our Db2 sub-systems, aren’t we?

Starting with the Easy Stuff…

CACHEDYN. YES/NO, default YES. Should always be set to YES unless you do not care about saving dynamic SQL performance. Back a few decades ago, the recommendation was to have this set to NO as default! Hard to believe that these days, where most shops have 80% – 90% dynamic SQL during the day!

Now we Get to the Numerics!

OUTBUFF. 400 – 400,000, default 4,000. This is *extremely* important and you really should set it to the highest possible value you can afford in real memory! As a minimum, it should be 102,400 KB (100 MB). This is the buffer that Db2 uses to write log records before they are “really” written to disk. The larger the buffer, the greater the chance that, during a ROLLBACK, the data required is still in the buffer and not on disk. This is a big win and the default of 4,000 KB is crazy low!

Skeletons in the Closet?

EDM_SKELETON_POOL. 5,120 – 4,194,304, default 51,200. This is one of my personal favorites (I wrote a newsletter solely on this a few years ago). The default is way too small these days. I personally recommend at least 150,000 KB, and even more if you can back it with real memory. Just like OUTBUFF, pour your memory in here but keep an eye on paging! If Db2 starts to page you are in serious trouble! Raising this can really help with keeping your DSC in control.

DBDs are Getting Bigger…

EDMDBDC. 5,000 – 4,194,304, default 23,400. The DBD Cache is getting more and more important as, due to UTS usage, the size of DBDs is increasing all the time. The default just doesn’t cut the mustard anymore so jump up to 40,960 as soon as you can.

DSC is Always too Small!

EDMSTMTC. 5,000 – 4,194,304, default 113,386. The EDM Statement Cache (really the Dynamic Statement Cache) is where Db2 keeps a copy of the prepared statements that have been executed. So when the exact same SQL statement with the exact same set of flags and qualifiers is executed, Db2 can avoid the full prepare and just re-execute the statement. This is basically a no-brainer and should be set to at least 122,880 KB. Even going up to 2 GB is perfectly OK. Remember: a read from here is *much* faster than a full prepare, so you get a very quick ROI and great value for the memory invested! Keep raising the value until your flushing rates for DSC drop down to just 100’s per hour, if you can! Remember to cross-check with the EDM_SKELETON_POOL ZPARM as well. It always takes two to Tango…

How Many SQLs?

MAXKEEPD. 0 – 204,800, default 5,000. The Max Kept Dyn Stmts parameter is how many prepared SQLs to keep past commit or rollback. It should be set to a minimum of 8,000 or so. Raising this might well cause a large memory demand in the ssidDBM1 address space so care must be taken.

RIDs Keep Getting Longer…

MAXRBLK. 0, 128 – 2,000,000, default 1,000,000. RID POOL SIZE is the maximum amount of memory to be available for RID Block entries. It should be at least 1,000,000 and, if you can, push it to the maximum of 2,000,000. Unless you want to switch off all RID Block access plans in which case you set it to zero – Obviously not really recommended!

Sorts Always Need More Space

MAXSORT_IN_MEMORY. 1000 to SRTPOOL. The maximum in-memory sort size is the largest available space to complete ORDER BY, GROUP BY or both SQL Clauses. Remember that this is per thread, so you must have enough memory for lots of these in parallel. The number should be between 1,000 and 2,000, but whatever value you choose, it must be less than or equal to the SRTPOOL size.

Sparse or Pair-wise Access?

MXDTCACH. 0 – 512, default 20. Max data caching is the maximum size of the sparse index or pair-wise join data cache in megabytes. If you do not use sparse index, pair-wise join, or you are not a data warehouse shop, then you can leave this at its default. Otherwise, set it to be 41 MB or higher. If it is a data warehouse subsystem, then you could set this as high as 512 MB. (This ZPARM replaced the short-lived SJMXPOOL, by the way.)

Sort Node Expansion

SRTPOOL. 240 – 128,000, default 10,000. SORT POOL SIZE is the available memory that is needed for the sort pool. The default is 10,000 KB and should really be set to 20,000 KB at least, if not more! IFCID 96 can really help you size this parameter. Remember that the number of sort nodes leapt up from 32,000 in Db2 11 to 512,000 nodes for non-parallelism sorts and 128,000 nodes for a sort within a parallel child task in Db2 12. This means raising this ZPARM can have an even greater positive effect than before.

Your “Top Ten List”

These ten ZPARMs really influence how your Db2 system works and so must always be changed with great care and attention to detail. Always do a before and after appraisal to see whether or not changing them helped or hindered your system!

And Finally…

I have updated our Pocket Tool, Performance Health Check, to check and report all these ZPARMs, as well as all the other checks like the 6 Byte RBA/LRSN or the Mapping table changes or the reason for REORG etc etc. Feel free to download and run it, as it is but a click away!

If you have any comments, or other ZPARMs you think are also important for performance, feel free to drop me a line!

TTFN,

Roy Boxwell

2022-01 Fazed by phases?

This month I wish to do a quick run through, and review, of the effects of REBIND with active packages and how phases, both in and out, have given us some interesting problems to deal with!

Phase-In

As I mentioned in an earlier blog, the phase-in/phase-out was a very good idea indeed! It finally meant that we could do REBINDs whenever we wanted and no-one had to shut down servers to simply action a REBIND. This was always especially galling when it was an “empty” package or just to get an FL upgrade.

Allowed?

Remember, Phase-in is only allowed with PLANMGMT(EXTENDED) style packages and if the package is not a generated package for a trigger or SQL routine, such as a procedure or user-defined function.
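
If you want a quick overview of which packages even qualify, the PLANMGMT column in SYSPACKAGE records the bind option. A minimal sketch (I believe ‘E’ means EXTENDED and ‘B’ means BASIC, but please verify against your catalog level):

-- Count packages by plan management style
SELECT PLANMGMT, COUNT(*) AS PACKAGES
FROM SYSIBM.SYSPACKAGE
GROUP BY PLANMGMT
ORDER BY PLANMGMT ;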

Problems ahead?

The problems began soon after it was introduced in June 2019 with PH09191 and FL505. Now, it looked really good, and indeed it was, but it had one little flaw… it still required a SIX lock to do the business.

Problem solved?

This was solved in January 2021 with APAR PH28693 which changed the lock from a SIX to a much better U lock. With this APAR all of our problems were fixed and life would be good!

Nope…

Sadly, we then found out that the inactive/phased-out packages were actually causing rapid space growth in SYSPACKCOPY (not in SYSPACKAGE!). Remember that the phased-out packages all get copied across with a new number to the “back-up” SYSPACKCOPY for use until the active thread finally disconnects.

The Problem Gets Worse

Now these backed-up, old packages are sitting around in SYSPACKCOPY waiting until the next REBIND comes along whereupon Db2 will attempt a FREE. Various customers noticed that, for their intensely used packages, this free never really happened and so they “hit the buffers” of 14 copy_ids for packages… very nasty!
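
A quick way to see whether any of your packages are creeping up on that limit is to count the copies per package in SYSPACKCOPY. A minimal sketch (the threshold of five is arbitrary):

-- Packages with the most old copies hanging around in SYSPACKCOPY
SELECT COLLID, NAME, COUNT(*) AS COPIES
FROM SYSIBM.SYSPACKCOPY
GROUP BY COLLID, NAME
HAVING COUNT(*) > 5
ORDER BY COPIES DESC ;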

Another APAR to the Rescue!

IBM then created a new APAR, PH33295, that enhanced the FREE PACKAGE command to have a new PLANMGMTSCOPE sub-clause PHASEOUT. All this does is delete all of those old, no longer used packages. However, I can testify that doing a:

FREE PACKAGE(*.*.(*)) PLANMGMTSCOPE(PHASEOUT)

in production is one of the scariest commands I have ever issued!!! Personally, I would have preferred a brand new command like FREE PHASEDOUT PACKAGES or so…

Invalid as Well

While you are ridding yourself of 100’s of packages you could also issue the PLANMGMTSCOPE(INVALID) command, which removes the phased-out packages and also gets rid of the dead, invalid packages which cannot ever be used anyway.

Do not Forget to Tidy up!

Once all these commands have been done, a REORG of the tablespace DSNDB06.SYSTSPKC is very highly recommended!

How have you experienced phase-in and phase-out? Do you think it’s a great feature or not?

Whatever you think, I would love to hear from you!

TTFN,

Roy Boxwell

2021-11 When is an NPI not an NPI?

This month, I wish to discuss everything about indexes but focusing on Non-Partitioned Indexes (from now on NPIs). I must confess that the real name of an NPI is a Non-Partitioned Secondary Index but I really dislike that as a term…

In the Beginning was the Index

Well, actually, it was the data, but then we needed a quick way of accessing the data and so the index was born. In Db2, these are b-tree based indexes with a few system pages and then a hierarchy of non-leaf pages starting with the root page, which then branches down in levels until it finally hits the leaf page – and these contain the direct pointers to the data. Very fast and just a few getpage I/Os to read the non-leaf/leaf pages in.

Hope Springs Eternal

The hope is that all the system stuff, the root page, and quite a few other non-leaf pages will be hanging around in the bufferpool and so no real I/O is required, but you all know the real world – it is never like that!

Bigger and Bigger

Tables and tablespaces got bigger and bigger over time, so then the Partitioned Tablespace was born. At first it required a partitioning index (PI), but nowadays you should all be using table based partitioning.
Brief aside: Recently, there was a mega outage caused by a really nasty bug in Db2 that brought everything crashing down (PH41947). Root cause was the use of index-based partitioning – still! Do IBM test all these old things on all levels of code? Nope. Time to get my Migration HealthCheck freeware and migrate away from index-based partitioning as fast as you can!
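
If you are not sure whether you still have index-controlled partitioning hiding somewhere, a rough catalog check (a sketch only – my freeware does a rather more thorough job) is to look for partitioned, non-UTS tablespaces whose tables have no table-controlled partitioning key:

-- Tables still using index-controlled partitioning
SELECT TS.DBNAME, TS.NAME AS TSNAME, TB.CREATOR, TB.NAME AS TBNAME
FROM SYSIBM.SYSTABLESPACE TS
    ,SYSIBM.SYSTABLES     TB
WHERE TB.DBNAME        = TS.DBNAME
  AND TB.TSNAME        = TS.NAME
  AND TB.TYPE          = 'T'
  AND TS.PARTITIONS    > 0
  AND TS.TYPE          IN (' ', 'L')  -- classic partitioned, not UTS
  AND TB.PARTKEYCOLNUM = 0            -- no table-controlled partitioning key
;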

A Star was Born

Well ok, not really a star, but the PI started life a little bit dazed and confused, and the formula for calculating “how big can my PI get before it goes *boom*” is also impressive! (Check out my Know Your Limits presentation for the formulae!)

And Then Came the NPI

Now this PI was naturally unique, but we all needed a way to check for other data and possibly non-unique column combinations. So the non-partitioned index (NPI) was born. This is an index over the *complete* partition range and it is normally heavily used by application SQL.

Pain points began

First major problem was the REORG of a single partition. It did the Tablepart (TP) data and it did the Index Part (IP), but it also had to update all the pointers within the NPI. This used to be terribly slow. At one shop, when reorging 22 partitions of a 254 partition TS it took 26 hours at the TP level and only 18 hours at the complete TS level. This is when we added to our RealTime DBAExpert utility generation software the ability to “roll up” from a TP reorg to a TS reorg based on number of partitions being REORGed, percentage of partitions being REORGed, or whether or not at least one NPI existed.

A Piece of Cake?

The other major problem with NPIs is that they can have a PIECESIZE, which is how large a Linear Page Set (LPS) can get before a new one is created. You can have from 1 to 4096 pieces for a single NPI, and the calculation to work out how many pieces you are allowed to have is another brain-numbingly complex one – which is also in my limits presentation.

Some sample SQLs for you:

-- FIRST YOU MUST CALCULATE THE MAXIMUM NUMBER OF POSSIBLE PARTITIONS
SELECT INTEGER(MIN(4096 , POWER(2E00 , 32)
                          / (
                             (64E00 * 1024E00 * 1024E00 * 1024E00) -- DSSIZE TS
                             / (32 * 1024)                         -- TS PGSIZE
                            )
                  )) AS MAX_NBR_PARTS
FROM SYSIBM.SYSDUMMY1 ;

-- THEN YOU PLUG THAT NUMBER INTO HERE
SELECT MIN(
           (64E00 * 1024E00 * 1024E00 * 1024E00)  -- DSSIZE TS
          ,(POWER(2E00 , 32) / 2048               -- NBR PARTS
           ) * 4096                               -- IX PGSIZE
          ) AS MAX_INDEX_SIZE
FROM SYSIBM.SYSDUMMY1 ;

-- FOR AN NPI YOU JUST NEED THE PIECESIZE AND THE INDEX PAGESIZE
SELECT INTEGER(MIN(4096 , POWER(2E00 , 32)
                          / (
                             (8E00 * 1024E00 * 1024E00 * 1024E00) -- PIECESIZE
                             / 4096                               -- IX PGSIZE
                            )
                  )) AS MAX_PIECES
FROM SYSIBM.SYSDUMMY1 ;

Death by Index

Suffice it to say you can be killed by running out of space in your PI, running out of space in your NPI, or even running out of NPI LPS pieces…

A New Type of Index

A brand new index type, the data-partitioned secondary index (DPSI), was created in Db2 V8 so that you could access partitions without having to scan thousands of datasets. Naturally, DPSIs only work if the application SQL is cognizant of their definition. Failure to do so can give terrible performance. These are naturally also bound to their respective partitions. It was also at this time that NPIs got rebranded to be NPSIs…

Separation for Utilities is Good

The major utility benefit that partitioned TSs give, is the ability to run them in parallel, including RECOVER. It was a really great idea so that you would only REORG, COPY or RUNSTATS the partitions that required it, and all the partition level utilities could run in parallel.

Db2 9 was All Change

To make these REORGS faster, IBM changed the way REORG worked with partitioned objects in Db2 9. They did speed it up but it meant that you could no longer work in parallel! You could add ranges to the PARTs being reorged but still not in parallel. This caused much grief and gnashing of teeth amongst loads of customers. Thus, we developed the so-called NPI Job. Anything that has a dependency on an NPI could be squirreled away in the NPI job(s) to then be run sequentially by the job scheduler after all other jobs were finished. Nasty!

Faster, Better, Cheaper

Now, after over 35 years, the b-tree is showing its age. IBM came up with the FTB (Fast Traverse Block), also known as FIT (Fast Index Traversal), which, simply put, copies all the non-leaf pages into a private area of the bufferpool and then, using L2 cache-aware code, removes the need for I/Os, apart from the very last two (leaf and data page), and thus reduces I/O and CPU.

Limits Again

Of course this is not for *every* index. Only unique indexes with a key length of 64 bytes or less, and with no index on expression (IOE), TIMESTAMP WITH TIME ZONE columns, or versioning, are allowed. But hey, it was a start!
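
If you fancy a first rough stab at finding candidates, the sketch below simply sums the column lengths of each unique index. It deliberately ignores the extra bytes for nullable and VARCHAR columns (and the IOE, TIMESTAMP WITH TIME ZONE and versioning exclusions), so treat the result as an optimistic short-list only:

-- Unique indexes whose raw key length is 64 bytes or less (approximate!)
SELECT IX.CREATOR, IX.NAME, SUM(CO.LENGTH) AS APPROX_KEY_LEN
FROM SYSIBM.SYSINDEXES IX
    ,SYSIBM.SYSKEYS    KY
    ,SYSIBM.SYSCOLUMNS CO
WHERE KY.IXCREATOR  = IX.CREATOR
  AND KY.IXNAME     = IX.NAME
  AND CO.TBCREATOR  = IX.TBCREATOR
  AND CO.TBNAME     = IX.TBNAME
  AND CO.NAME       = KY.COLNAME
  AND IX.UNIQUERULE IN ('U', 'P')
GROUP BY IX.CREATOR, IX.NAME
HAVING SUM(CO.LENGTH) <= 64 ;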

Duplicate Allowed

Now even duplicate (non-unique) indexes are supported. Well, to start with, only with a key length of 56 bytes or less, but we are getting there! You will require PH30978 and UI75643 and then set the new ZPARM FTB_NON_UNIQUE_INDEX to YES.

Is an NPI Always an NPI?

Now to the core of the matter. As you should all be aware we can, talking about base tablespaces, only create Universal Table Spaces (UTS) in the three flavors we know and love – Partitioned by Growth, Partitioned by Range, and Partitioned by Range Relative Page Numbering.

And???

Well, imagine you have a TS with MAXPARTITIONS = 1 – by definition it cannot go to multiple partitions but all indexes created on it are also by definition NPIs. This is also sort of true for MAXPARTITIONS > 1 if the current number of defined and existing partitions (NUMPARTS) is equal to one.

“Fake” NPIs

These are therefore, in my humble opinion, fake NPIs – You could argue that the NUMPARTS = 1 is “risky”, as Db2 could add a new partition by the next INSERT/UPDATE, but I think the risk is low. For these cases I think the jobs should not go to special NPI handling.
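
To see how many of these fake NPIs you have, a quick sketch (to be taken with the usual pinch of salt) is to list the non-partitioned indexes sitting on UTS spaces where only one partition physically exists:

-- Non-partitioned indexes on UTS with exactly one existing partition
SELECT IX.CREATOR, IX.NAME, TS.DBNAME, TS.NAME AS TSNAME
      ,TS.MAXPARTITIONS, TS.PARTITIONS
FROM SYSIBM.SYSINDEXES    IX
    ,SYSIBM.SYSTABLES     TB
    ,SYSIBM.SYSTABLESPACE TS
WHERE IX.TBCREATOR     = TB.CREATOR
  AND IX.TBNAME        = TB.NAME
  AND TB.DBNAME        = TS.DBNAME
  AND TB.TSNAME        = TS.NAME
  AND TS.MAXPARTITIONS > 0        -- a PBG (MAXPARTITIONS is set)
  AND TS.PARTITIONS    = 1        -- but only one partition exists
  AND IX.INDEXTYPE     = '2'      -- plain, non-partitioned indexes
;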

What are your Db2 maintenance jobs doing with these fake NPIs?

I would be very interested to hear from you all about your thoughts and plans here!

TTFN

Roy Boxwell

2021-08 ICI – Db2 12 Update

This month, I would like to review the ICIs that we have had for a few releases plus those that have recently appeared, and then the trouble with twelve …

It began …

Db2 development realised that something had gone horribly wrong when a bunch of Db2 users suddenly found that the output from their queries was no longer what it should be … After a bit of digging, the CHAR format rewrite was found to be the root cause and the fix was hastily created – BIF_COMPATIBILITY ZPARM.

What’s in a name?

Well, then VARCHAR happened and along came a very unpleasant problem with JAVA timestamps and, as I have documented in earlier BLOGs, it all started getting silly with one ZPARM being used for multiple format problems.

Along came the ICI (Incompatible Change Indicator)

So in Db2 10 we got a new IFCID, the 366, which was spat out at *every* prepare (bind) of any SQL that, possibly, contained an ICI. Now we started off pretty small with just three ICI’s: the first two being the reformatted output of CHAR and VARCHAR and the third being the TIMESTAMP format problem.

Db2 10 updates for Db2 11

Here they brought out numbers four to nine to handle all the little changes in Db2 11 so that you got the alert in Db2 10 before it bit you in Db2 11 – all well and good.

Db2 11 updates

The big change was the brand new IFCID 376 – which is the evil twin of the 366. The only difference is that Db2 cached the entries, so you basically got a rolled-up 366 – apart from one tiny little detail: the execution count was missing. For the 366 it is 1:1, but for the 376 it is 1:nnnnn, which could be any positive integer. They then added the 11nn range, going all the way up to 1111, and then they brought in 1112 for empty XML tags. Now all of these have been discussed in my earlier blogs.

What’s new in the ICI World?

Db2 12 of course! They brought out 1201 very early on due to POWER causing a problem. The output on overflow changed from a negative to a positive SQLCODE, which can of course cause “problems”… Why did this change even happen? IBM rewrote the code from using LE 32 bit assembler math calls to using C, and so the function “knew” if it overflowed and could return a warning saying so, whereas the 31 bit assembler just died a death and you got a negative SQLCODE.

Naming Convention?

Then it went quiet for a while until something weird happened: 1215031 and 1215032 appeared. Now, at first, I liked the idea of putting the FL into the ICI, but then I realised it was actually pretty pointless and just made it more confusing!

1215031 is issued when you could qualify a row with NULL in the DATA CHANGE OPERATION column using the FOR SYSTEM_TIME FROM/BETWEEN predicate on a system period temporal table with AUDITING.

1215032 is issued when you attempt to call stored procedure SYSPROC.SET_MAINT_MODE_RECORD_NO_TEMPORALHISTORY as this is no longer supported for data replication calls.

1204 (note the FL has gone…) is issued when you use CURRENT_SERVER or CURRENT_TIMEZONE as a column or variable name.

In the documentation it says 1202, but you actually get a 1215031 – and there’s no mention of 1215032 and 1204!

More new ones

Meanwhile, and nothing to do with Db2 12, IBM also brought out:

11 for using SELECT INTO syntax with a UNION

[12 was thankfully skipped!]

13 for INSERT/UPDATE/DELETE using an attribute WITH UR

Both of these were more parsing bugs than anything evil, but both require code changes if they appear!

Utility Time

Now IBM have enhanced the LOAD utility with LOAD FORMAT DELIMITED for correct packed numeric data support when one or more virtual decimal digits exist. This has caused another problem, as the data loaded could be viewed (as it was by one of our customers) as inconsistent. IBM then created another APAR to roll back the change and introduce a new ZPARM, LOAD_DEL_IMPLICIT_SCALE, to control how these numbers should be loaded. The default is NO, like it used to be, with an implied decimal point at the far right of the data. At the same time, it will now alert users that they could have an incompatibility with the LOAD by changing IFCID 25 to set a new bit. This warns you that you have done a LOAD into a table where there is packed decimal data with one or more digits after the virtual decimal point. If YES, then LOAD interprets the scale setting in the LOAD statement. For details please see APARs PH28104 and PH36908.

Pain Point for you?

The above mentioned new notification is a bit strange (pardon the pun), as there is already the IFCID 376 for incompatibilities. Now you must also start the IFCID 25 and go checking bits. So my question to you all is: Have you got this problem and, if so, do you think it is worth it to integrate IFCID 25 bit checking into SOFTWARE ENGINEERINGs/SEGUSs current ICI/BIF Use Case in our WorkLoad Expert and/or our BIF/ICI Freeware software?

The future is bright

As far as ICIs are concerned they just keep on rolling!

As always if you have any comments, especially with regard to IFCID 25, please feel free to e-mail!

TTFN

Roy Boxwell

2021-07 IDUG NA 2021 Virtual – A Roy round-up

This month I would like to “roll through” the recently held IDUG NA Virtual 2021 – Another virtual conference, but still full of tons of good stuff! Read on dear reader to see what piqued my interest!

Do you like being Profiled?

Profiling is getting much better (not easier to use, mind you!). The best bit was “Remove ‘ping’ and validation queries” and the two easiest ways to do this are either to enable Sysplex WLB, or to use the isValid() JAVA method. The typical SELECT * FROM SYSIBM.SYSDUMMY1 is a real pain …

PBG Sad future…

The idea of PBGs has, from my POV, changed quite dramatically. From the “Set it to 100 MAXPARTITIONS and forget it” to “Set MAXPARTITIONS to 1 and DSSIZE to 64GB”. I still think this is more of a design flaw than a good idea. If the spacemap search is such a performance killer – fix it …
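
For reference, the “set MAXPARTITIONS to 1” style of definition being recommended looks roughly like this (names and bufferpool are just placeholders):

-- One-partition PBG: no spacemap searching across partitions by design
CREATE TABLESPACE MYTS IN MYDB
  MAXPARTITIONS 1
  DSSIZE 64 G
  BUFFERPOOL BP2
  LOCKSIZE ANY ;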

Performance tidbits

Just using the “new” feature PCTFREE xx FOR UPDATE yy, or the PCTFREE_UPD ZPARM, is a bit of a no-brainer if you have variable length or compressed records and lots of updates that then cause an Overflow record (better called an indirect reference) – these bloat your data and cause more getpages over time. A sort of creeping death! The only cure is periodic REORGs to remove all of these indirect references, but it is better to delay this as long as possible – hence the FOR UPDATE percentage is a very nice feature!
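
As a small sketch of the DDL involved (the percentages are purely illustrative – derive your own from your update patterns; the new values take effect when the data is next LOADed or REORGed):

ALTER TABLESPACE MYDB.MYTS
  PCTFREE 10 FOR UPDATE 5 ;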

From Db2 11, you could make your DGTTs NOT LOGGED which can give dramatic performance boosts! Naturally, it is a code change in the create DDL. You must add NOT LOGGED to the back end of the DECLARE statement but there have been really amazing savings like 60% elapsed time reduction and greater than 20% cpu savings! Of course, your mileage may vary. Always test these things before blindly changing all of your DECLARE statements!
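
The change itself is trivial. A minimal sketch of such a DECLARE (table and column names invented):

DECLARE GLOBAL TEMPORARY TABLE SESSION.MY_WORKLIST
  (ORDER_NO INTEGER NOT NULL
  ,STATUS   CHAR(1) NOT NULL)
  ON COMMIT DELETE ROWS
  NOT LOGGED ;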

zHyperWrite: If you are running with DS8000 and Metro Mirror (PPRC) then this really is a great idea! Update the IECIOSxx HYPERWRITE=YES, issue the SETIO HYPERWRITE=YES z/OS command and set ZPARM REMOTE_COPY_SW_ACCEL to ENABLE to get large reductions in active log writing times! Note that HyperSwap must be enabled as well.

zHyperLink (baby brother of zHyperWrite) is also fantastic if you have intensive reads (especially sync reads) and lots of active log writing. (Who does *not* have this combo???) Set the zHyperLink to ENABLE for both read and write, DATABASE for read only, or ACTIVELOG for write only. The downside is that you may see CPU increase slightly, but the payback of elapsed reduction (over 20%) is well worth it.

Too many statistics can be bad for you

Also heard a few times was the statement “Never collect STATISTICS as part of REORG INDEX” – I have firmly believed in this for years and am very happy that it is now the modern mantra!

Run RUNSTATS rarely, if at all! How often does the statistical data actually change? During a REORG – Always! But “on its own” ? Why? Creation/Change of index – Naturally. Mass delete and LOAD with brand new data – Clearly! Insert another million rows? Really needed?? Go logarithmic like our RTDX RunstatsAvoidance feature.

Utility updates

High speed REORG: SORTDATA NO RECLUSTER NO – Just for actioning DDL as fast as possible. Downside is the CLUSTERRATIOF might be killed and, even worse in my humble opinion, all of the RTS counters are set to zero! Of course, if you are clever, you could simply back up the relevant RTS rows before the REORG and replace them afterwards…
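
A sketch of that “clever” route (the backup table name is invented, and you will need to merge the interesting counters back with UPDATEs afterwards):

-- Before the SORTDATA NO RECLUSTER NO REORG: keep a copy of the RTS rows
CREATE TABLE MYSCHEMA.RTS_TS_BACKUP AS
  (SELECT * FROM SYSIBM.SYSTABLESPACESTATS)
  WITH NO DATA ;
INSERT INTO MYSCHEMA.RTS_TS_BACKUP
SELECT * FROM SYSIBM.SYSTABLESPACESTATS
WHERE DBNAME = 'MYDB'
  AND NAME   = 'MYTS' ;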

REORG got a parallelism boost when building PIs (PH28092 & PH33859), naturally only when no NPIs exist. If running on the z15, think about setting ZPARM UTILS_USE_ZSORT (PH28183) to improve performance.

LOAD got a bunch of improvements (PI96136, PH11255, PH19073, PH23105) and the capability to override row change timestamps with PH25572. It also got the ability to skip the MAXASSIGNEDVALUE update in Db2 12 with PH28476.

RUNSTATS got the new STATCLGSRT ZPARM (PI74408 and PH03678) which then added the STATCLGMEMSRT parameter. RUNSTATS also got an update to avoid “overdoing things”:

  • If BOTH or HISTOGRAM then the COUNT is limited to be 100
  • If sampling value is less than 50% and FREQVAL COUNT BOTH or LEAST is specified then change it to be MOST

These avoid flooding the SYSCOLDIST with basically useless data that just slows down PREPARE and BIND/REBIND as well as possibly causing the optimizer to make a bad judgment.

TEMPLATE support also finally got Large Block Interface! New BLKSZLIM parameter in PH30093.

Compressed?

Generally speaking, random table access will show better performance than sequential table access for compressed data, as the row is only decompressed if required.

Generally speaking, sequential index processing (index scans) works much better than random index access for compressed indexes. The opposite of tables! Why? The entire index page must be read in and decompressed in one call for indexes.

So, once again, know your data, know your access patterns!

New Compressed?

Huffman Object Compression: Came along in FL509 and is well worth a look, as it can give very good results but, as always, test and check before blindly setting all compression to Huffman!

Index room for growth?

On index design the Holy Grail is to set the PCTFREE equal to the number of new index entries between REORG runs divided by the total entries, expressed as a percentage. FREEPAGE should be used to avoid index page splits. It may also help to increase the index page size and go to index compression.
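
The Real-time Statistics can feed the PCTFREE side of that calculation. A minimal sketch (assuming the counters are populated and the interval since the last REORG is representative of normal activity):

-- Suggested index PCTFREE = inserts since last REORG / total entries, as a percentage
SELECT DBNAME, INDEXSPACE, PARTITION
      ,INTEGER((REORGINSERTS * 100.0) / TOTALENTRIES) AS SUGGESTED_PCTFREE
FROM SYSIBM.SYSINDEXSPACESTATS
WHERE TOTALENTRIES > 0
ORDER BY 4 DESC ;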

Last mix

Data sharing: OA59122, which went PE, and the OA60394 that fixed it both correct a problem of “contention management” that can be CPU-expensive when contention has gone – but what happens next?

Machine Learning – Unsupervised (sounds like me most of the time) … especially “data munging” …

And finally…

A whole ton of really interesting stuff all about deprecated items… oh wait a second, that was my presentation! Anyway, good for a read and learn all about the nasty things lurking in the catalog! Do not forget we have freeware that shows it all to you! MigrationHealthCheck (MHC) is the name!

As always it is a pleasure to hear any comments or criticisms!

TTFN,

Roy Boxwell

2021-06 How large is my system?

After I held my Deprecated Items IDUG talk at the Virtual 2021 North American IDUG, one of the attendees asked me if I also had some SQL for quickly retrieving the current “size of the database”. Normally I use our very own SpaceManager product for this, as it uses CSI access calls to get up-to-the-second info about the size of the VSAM LDS; however, for a quick and dirty query the Db2 catalog can be partially trusted!

Where to begin?

The easiest places to start are the two SYSIBM.SYSxxxxxPART tables. These contain all the partition details, and if the PARTITION is zero then it is a non-partitioned tablespace or an NPSI-defined index.

What to select?

SELECT COUNT(*)                          AS INDEX_PARTS
      ,SUM(IP.SPACEF)                    AS KB
      ,INTEGER(SUM(IP.SPACEF) / 1024)    AS MB
      ,INTEGER(SUM(IP.SPACEF) / 1048576) AS GB
FROM SYSIBM.SYSINDEXES   IX
    ,SYSIBM.SYSINDEXPART IP
WHERE     IX.NAME    = IP.IXNAME
  AND     IX.CREATOR = IP.IXCREATOR
  AND NOT IP.SPACE   < 0
;

This query just shows you the current size of all of your indexes. Notice the NOT IP.SPACE < 0 – This is to not count any DEFINE NO indexes as these use no space of course! I simply added the two INTEGER functions – naturally you can change these to any format you would like.

Table time!

SELECT COUNT(*)                          AS TABLE_PARTS
      ,SUM(TP.SPACEF)                    AS KB
      ,INTEGER(SUM(TP.SPACEF) / 1024)    AS MB
      ,INTEGER(SUM(TP.SPACEF) / 1048576) AS GB
FROM SYSIBM.SYSTABLESPACE TS
    ,SYSIBM.SYSTABLEPART TP
WHERE     TS.NAME   = TP.TSNAME
  AND     TS.DBNAME = TP.DBNAME
  AND NOT TP.SPACE  < 0
;

This does the same but now for tableparts of course!

More detail?

Now these two SQLs are OK for quick results but it is probably more interesting to see which index types and which tablespace types are taking up how much space on your disks. So here the expanded queries follow:

SELECT COUNT(*)                          AS INDEX_PARTS
      ,IX.INDEXTYPE
      ,IX.IX_EXTENSION_TYPE
      ,IX.PAGENUM
      ,SUM(IP.SPACEF)                    AS KB
      ,INTEGER(SUM(IP.SPACEF) / 1024)    AS MB
      ,INTEGER(SUM(IP.SPACEF) / 1048576) AS GB
FROM SYSIBM.SYSINDEXES   IX
    ,SYSIBM.SYSINDEXPART IP
WHERE      IX.NAME    = IP.IXNAME
   AND     IX.CREATOR = IP.IXCREATOR
   AND NOT IP.SPACE   < 0
GROUP BY IX.INDEXTYPE
        ,IX.IX_EXTENSION_TYPE
        ,IX.PAGENUM
ORDER BY IX.INDEXTYPE
        ,IX.IX_EXTENSION_TYPE
        ,IX.PAGENUM
;

The INDEXTYPE column can be Blank (For ancient Type 1 indexes which I hope no-one has), 2 for “normal” Type 2 Indexes, D for Data-partitioned Secondary Indexes (DPSIs) and P for Indexes which are both Partitioned and Partitioning (which I also hope no-one has anymore!)

What’s in a TYPE?

The IX_EXTENSION_TYPE column can be Blank for “normal” indexes, N for Node ID indexes on XML spaces, S for Scalar Expressions, T for Spatial or V for real XML Indexes.

Using new features yet?

PAGENUM is just telling you how many of your indexes are enjoying the great feature of fully variable DSSIZE, adjustable on-the-fly with no outage (As long as you adjust upwards of course!) If set to “A” for Absolute then sadly not, much better is “R” for Relative!

SELECT COUNT(*)                          AS TABLE_PARTS
      ,TS.TYPE
      ,TS.PAGENUM
      ,SUM(TP.SPACEF)                    AS KB
      ,INTEGER(SUM(TP.SPACEF) / 1024)    AS MB
      ,INTEGER(SUM(TP.SPACEF) / 1048576) AS GB
FROM SYSIBM.SYSTABLESPACE TS
    ,SYSIBM.SYSTABLEPART  TP
WHERE     TS.NAME   = TP.TSNAME
  AND     TS.DBNAME = TP.DBNAME
  AND NOT TP.SPACE  < 0
GROUP BY TS.TYPE
     ,TS.PAGENUM
ORDER BY TS.TYPE
     ,TS.PAGENUM
;

What’s in a tablespace TYPE?

Now at the tablespace level we have, again, the TYPE column, which can be Blank for a “normal” (non-UTS) tablespace, G for UTS PBG, L for Large, O for LOB, P for XML or R for UTS PBR. Going forward you will have to migrate any of the Blank tablespaces to PBG/PBR, and L should be gone anyway!

My Output

When I run these here in Düsseldorf in my little Db2 12 FL508 test system I get this output:

---------+---------+---------+---------+---------+---------+--
INDEX_PARTS                       KB           MB           GB
---------+---------+---------+---------+---------+---------+--
       4453  +0.5951116000000000E+07         5811            5
---------+---------+---------+---------+---------+---------+--
TABLE_PARTS                       KB           MB           GB
---------+---------+---------+---------+---------+---------+--
       3733  +0.3255953100000000E+08        31796           31
---------+---------+---------+---------+---------+---------+-
INDEX_PARTS INDEX IX_EXT PAGE                     KB    MB GB     
            TYPE  TYPE   NUM
---------+---------+---------+---------+---------+---------+-
       3280 2            A   +0.5100220000000000E+07  4980  4
         25 2     N      A   +0.1800000000000000E+05    17  0
          3 2     S      A   +0.1224000000000000E+05    11  0
          8 2     V      A   +0.3744000000000000E+04     3  0
         15 D            A   +0.8112000000000000E+04     7  0
         24 P            A   +0.1392000000000000E+05    13  0
       1098 P            R   +0.7948800000000000E+06   776  0
---------+---------+---------+---------+---------+--------
TABLE_PARTS TYPE PAGENUM                     KB     MB  GB
---------+---------+---------+---------+---------+--------
        363      A      +0.5242134000000000E+07   5119   4 
       2094 G    A      +0.2109695400000000E+08  20602  20
          4 L    A      +0.1312000000000000E+04      1   0
        116 O    A      +0.1186235000000000E+07   1158   1
         25 P    A      +0.2952000000000000E+05     28   0
         33 R    A      +0.6932800000000000E+05     67   0
       1098 R    R      +0.4934048000000000E+07   4818   4

So you can see I already have over 1000 UTS PBR RPN table parts in use! Not bad! As a vendor I cannot just migrate everything of course so those 24 Index Controlled Partitioning indexes and the 33 Absolute Partitions aren’t going anywhere soon!

How does your site look? How are your plans for migrating to UTS PBR RPN going?

As always, I would love to hear the results!

TTFN,

Roy Boxwell

2021-05 Soundex and other cool features part 7 for Db2 12

In this, obviously, never-ending series of new features, I will roll up all the new ones since my “SOUNDEX and other ‘cool’ features – Part six: All new for Db2 12” newsletter from 2018-08, where I first covered the new stuff in Db2 12.

What was new in FL100

At that level, a new table function called BLOCKING_THREADS was introduced, which is primarily used by the command DISPLAY BLOCKERS to help in looking for bad guys who are ready to BLOCK your change to FLxxx.

FL500 Changed Scalars

Two scalar BiF got changed: TRANSLATE and UPPER can now both use UNI_60 as a locale.

FL500 New Scalars

In FL500, we got a nice bunch of new Scalar functions:

GENERATE_UNIQUE_BINARY (like GENERATE_UNIQUE except it returns a BINARY(16) value)
HASH_CRC32
HASH_MD5
HASH_SHA1
HASH_SHA256
PERCENTILE_CONT
PERCENTILE_DISC
WRAP

FL500 Changed Aggregates

One aggregate BiF got updated: ARRAY_AGG can now be used for associative arrays.

FL501?

It brought in just one new Aggregate BiF, which was the very famous LISTAGG. This is famous, as it was the “proof of concept” that Agile actually worked and could function in the wild.
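
As a quick reminder of what it does, a tiny example (using the classic Db2 sample EMP table layout as an assumption):

-- One row per department with a comma-separated list of surnames
SELECT WORKDEPT
      ,LISTAGG(LASTNAME, ', ') WITHIN GROUP (ORDER BY LASTNAME) AS EMPLOYEES
FROM DSN8C10.EMP
GROUP BY WORKDEPT ;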

FL502?

Two changes to Scalar BiFs in this release:

GRAPHIC The first argument now accepts numeric data types, including SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, DOUBLE, FLOAT, and DECFLOAT.
VARGRAPHIC The first argument now accepts numeric data types, including SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, DOUBLE, FLOAT, and DECFLOAT.

FL503?

Nothing new for us in that release!

FL504?

It brought in a brand new concept for Db2 on z/OS. The concept of “pass-through BiFs”. These are just checked for valid syntax before being passed on down to the Accelerator that would actually execute them. Obviously, you must be careful about their usage!

CUME_DIST
CUME_DIST (aggregate)
FIRST_VALUE
LAG
LAST_VALUE
LEAD
NTH_VALUE
NTILE
PERCENT_RANK
PERCENT_RANK (aggregate)
RATIO_TO_REPORT
REGEXP_COUNT
REGEXP_INSTR
REGEXP_LIKE
REGEXP_REPLACE
REGEXP_SUBSTR

FL505?

Brought in a whole bunch of encryption and decryption BiFs:

ENCRYPT_DATAKEY converts a block of plain text to a block of encrypted text using a specified algorithm and key label.
DECRYPT_DATAKEY_datatype functions return the block of encrypted text from ENCRYPT_DATAKEY as a block of the datatype specified by the function used. List of datatypes: DECRYPT_DATAKEY_INTEGER, DECRYPT_DATAKEY_BIGINT, DECRYPT_DATAKEY_DECIMAL, DECRYPT_DATAKEY_VARCHAR, DECRYPT_DATAKEY_CLOB, DECRYPT_DATAKEY_VARGRAPHIC, DECRYPT_DATAKEY_DBCLOB, and DECRYPT_DATAKEY_BIT.

FL506?

Here IBM came up with a slew of “alternative names” for existing BiFs and a new method of calling HASH routines. The background here was to make porting to Db2 on z/OS that little bit easier!

CHAR_LENGTH – for CHARACTER_LENGTH, which returns the length of its argument in the number of string units that are specified
COVAR_POP – for COVARIANCE or COVAR, which return the population covariance of a set of number pairs
HASH – for HASH_MD5, HASH_SHA1, or HASH_SHA256, which return the result of applying a hash algorithm to an input argument, depending on the value specified for the second argument of the HASH function:

        0 (default) HASH_MD5 Returns VARBINARY(16) instead of BINARY(16)
        1 HASH_SHA1          Returns VARBINARY(20) instead of BINARY(20)
        2 HASH_SHA256        Returns VARBINARY(32) instead of BINARY(32)

POW – for POWER, which returns the value of one argument raised to the power of a second argument
RANDOM – for RAND, which returns a double precision floating-point random number
STRLEFT – for LEFT, which returns a string that consists of the specified number of leftmost bytes or the specified string units
STRPOS – for POSSTR, which returns the position of the first occurrence of an argument within another argument
STRRIGHT – for RIGHT, which returns a string that consists of the specified number of rightmost bytes or specified string units
TO_CLOB – for CLOB, which returns a CLOB representation of the first argument
TO_TIMESTAMP – for TIMESTAMP_FORMAT, which returns a timestamp for a character string expression, using a specified format to interpret the string

FL507?

Here IBM development added another slew of “pass through” BiFs:

ADD_DAYS
BTRIM
DAYS_BETWEEN
NEXT_MONTH
REGR_AVGX
REGR_AVGY
REGR_COUNT
REGR_INTERCEPT or REGR_ICPT
REGR_R2
REGR_SLOPE
REGR_SXX
REGR_SXY
REGR_SYY
ROUND_TIMESTAMP if invoked with a date expression

FL508?

None that I have read of yet!

FL509?

None yet…

Naturally I will be keeping this newsletter up-to-date, as necessary.

Any questions or ideas, do not hesitate to drop me a line,

TTFN,

Roy Boxwell

2021-04 PBG Partition Pain

One of the things I dearly love about Db2 is that I never ever stop learning. I have been working with Db2 since V1.3 and I was completely surprised by what I just learnt!

PBG What is it?

For those that do not know, PBG is a Partitioned-by-Growth Universal Tablespace (UTS) that came into existence in Db2 V9, as did its twin – Partitioned-by-Range (PBR). IBM have repeatedly said that all new developments will focus on tables/indexes etc. that are backed by UTS and so deprecated all the other types (Simple, Segmented, Classic Partitioned). When you create a PBG you specify how many partitions it can get to using the MAXPARTITIONS clause, and it starts by allocating just the very first partition.

First Part – First Problem

Quickly after GA the first problems with this data model started appearing. First up were the people who used 4096 as MAXPARTITIONS and then started running out of memory as all the internal control blocks were being allocated for 1000’s of partitions that would most probably never exist!

Next problem – Copy problems

Then users of DSN1COPY and other cloning methods hit the next really nasty problem: What happened if in subsystem A you had three partitions but in subsystem B only one, and you used DSN1COPY to copy the data? This problem was unsolvable until another new DDL parameter was quickly rolled out: NUMPARTS. This controls how many partitions are physically allocated at DDL creation time. This should naturally *only* be used if you intend to cross-clone datasets where the number of partitions does not match.
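
A sketch of what that looks like in the DDL (names and sizes invented) – here three partitions are physically created straight away, so a DSN1COPY source with three partitions will match:

CREATE TABLESPACE CLONETS IN CLONEDB
  MAXPARTITIONS 10  -- logical upper limit
  NUMPARTS       3  -- physically allocate three partitions at CREATE time
  DSSIZE 4 G
  BUFFERPOOL BP1 ;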

Running out of space?

Various people have written monitoring software over the years that intercept IFCIDs from Db2 and audits the data within. Our very own SpaceAssuranceExpert (SAX) does this to warn you if you are about to hit any one of the many Db2 limits of space management. Naturally one of these limits is when you are nearing, or have allocated, the last partition, in other words you are soon reaching or have indeed reached MAXPARTITIONS. This means you are about to run out of partitions and, if in the last partition, SAX monitors how full it is to warn the DBA group of possible -904’s coming up. All well and good so far!

Newest problem – Death by descending key

It then came to my attention that Db2 12 does not, in fact, work that way in one special case. If the DDL has the same number for MAXPARTITIONS and NUMPARTS (So all possible partitions are allocated at creation) and you have a CLUSTER defined index where all columns are also DESCENDING then the following happens:

First, data is inserted in partition one and extents are taken as required until the partition is full (so DSSIZE has been reached); then the inserts are targeted at the *last* partition! Once this partition is full it then goes to MAXPARTITIONS – 1 and fills that one up!

This naturally kills any attempt to see how full the dataset is online. Obviously you can do a spacemap analysis to review exactly how much free space is there, or hope that the Real-time Statistics table can help you, but doing this instantly for every IFCID is just crazy!

Is that a must?

Thankfully you can toggle off this very strange behaviour by setting MAXPARTITIONS to one more than NUMPARTS. Then Db2 behaves itself and goes back to 1, 2, 3, etc.

An ALTER to the rescue?

But what if you already have such a definition? If you already have MAXPARTITIONS 3 and NUMPARTS 3, what about doing an ALTER like:

ALTER TABLESPACE SAXTDB06.SAXTTS06
  MAXPARTITIONS 4 ;

Nope. Then Db2 gets *very* dumb! It then inserts into partition one until it is full, then it skips to the *old* MAXPARTITIONS partition three and fills it up, and then goes backwards until partition two is full, before finally jumping to partition four! Whoever thought this up must have been crazy! In a space with three partitions which is then ALTERed to have a MAXPARTITIONS of four you then fill partitions 1, 3, 2, 4 in that order… In my opinion this is a bug in Db2 as this space should be handled exactly as if it was defined with MAXPARTITIONS 4 and NUMPARTS 3!

What about an Index ALTER?

BINGO! Doing an ALTER on the index like:

ALTER INDEX SAXTEST.SAXTTB06_NEW_INDEX
  NOT CLUSTER ;

Then gets the inserts going “back” into the normal partitioning sequence. All very odd, confusing and misleading!

There must be reason!

I have thought long and hard about “Why?” and I cannot think of one answer that makes *any* sense! If Db2 started inserting in the last partition I could sort of get it – It might reduce the unclustered insert counts – but only doing this *after* filling the first partition just makes it all seem so pointless…and then after an ALTER it just gets worse!

One “pointer” found

While looking around the web for any help I did stumble across this little gem:

The new V12 looping through partition function, can be either ascending or descending order. If the search is in descending order, then it is possible to skip a partition between the original target and last partition of the table space. Therefore, a sanity check of 04E-00C90101 LOC=DSNIDM DSNISGNS ERQUAL50C4 will be encountered, to catch this situation.

In APAR PI68087 from 2016.

I think this undocumented behavior should be either corrected or, even better, removed as soon as possible!

As usual, any questions or comments please feel free to mail me!

TTFN

Roy Boxwell