2018-05 Audit 2.0 – GDPR Audit guide and checklist for Db2 z/OS

Or: How I Learned to Stop Worrying and Love the Auditor

A guideline in 5 big Steps to Audit for Personal Data Protection

First up, this is not another Audit review and health check sell! We, and I mean especially in the European Union (EU), are about to get the biggest update to “my data”, “my right to know” and “my right to forget (delete)” that has ever happened, when the General Data Protection Regulation (GDPR) kicks off on May 25th, 2018.

Who cares?

Well, we all should really! Very soon hordes of people could well start demanding to know if you have any data about them and if so where it is and who you give it to. Personal Data will be big business!

Age of Consent

Just assuming all is ok is also no longer valid or lawful – any personal data that you keep must be kept with the *active* consent of the data owner… This is going to get awfully messy awfully quickly! Do not forget that the definition of personal data has also changed quite a bit e.g. IP Addresses. For the record here is the actual text in the GDPR regulations:

Art. 4 GDPR Definitions

For the purposes of this Regulation:

1.      ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

And

Art. 9 GDPR Processing of special categories of personal data

1.      Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.

Security by design and Security of processing

Very important are Articles 25 and 32:

Art. 25 GDPR Data protection by design and by default

1.       Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.

2.       The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons.

The last part of paragraph two is important here.

Art. 32 GDPR Security of processing

1.      Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:

1.      the pseudonymisation and encryption of personal data;

2.      the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;

3.      the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;

4.      a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.

2.      In assessing the appropriate level of security account shall be taken in particular of the risks that are presented by processing, in particular from accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data transmitted, stored or otherwise processed.

Here paragraphs 1.2, 1.4 and 2 are the biggies!

A small or large fine for you today?

The fines are also amazingly high. The “minor” tier – for example, being more than 72 hours late in reporting a data leak (a breach) – is up to 2% of global turnover or €10,000,000, whichever is *higher*. If you are really naughty, like disregarding basic data protection principles, moving data abroad unlawfully or ignoring an individual’s rights, then you get hit for up to 4% of global turnover or €20,000,000 – again, whichever is *higher*.

First up?

I, for one, do not want to be the first company that makes the headlines… How about you?

Who are you afraid of?

Now I know that most of the talk about GDPR is “right to data” for the general public but we all know, as IT specialists, that the people to be really afraid of are ex-employees who did not leave on happy terms. They know *exactly* where the knife could best be put… They know exactly where, and on which platforms, all of the data really is.

Protect yourself – Due diligence

What can you do? Well, the first thing is to actually understand what GDPR means to you and your firm’s data. Then you must be able to prove to auditors that you tried your very best. We will all probably get hacked at some time, but if you can show that you really did try your best, it counts in your favour. Using all of the data you gather in a post-mortem, or for forensics, is also a very good idea, as can be seen in paragraphs 2.3 and 2.4 below:

Art. 83 GDPR General conditions for imposing administrative fines

1. Each supervisory authority shall ensure that the imposition of administrative fines pursuant to this Article in respect of infringements of this Regulation referred to in paragraphs 4, 5 and 6 shall in each individual case be effective, proportionate and dissuasive.

 2. Administrative fines shall, depending on the circumstances of each individual case, be imposed in addition to, or instead of, measures referred to in points (a) to (h) and (j) of Article 58(2). When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case due regard shall be given to the following:

1. the nature, gravity and duration of the infringement taking into account the nature scope or purpose of the processing concerned as well as the number of data subjects affected and the level of damage suffered by them;

2. the intentional or negligent character of the infringement;

3. any action taken by the controller or processor to mitigate the damage suffered by data subjects;

4. the degree of responsibility of the controller or processor taking into account technical and organisational measures implemented by them pursuant to Articles 25 and 32;

The list goes on after these of course…

Db2 Audit – a new way?

Most sites use SMF and do daily cuts to then offload to a repository system of some sort on some sort of hardware. The problem here is that the amount of SMF data you are generating can swamp you and this method is not nearly quick enough. Waiting a day is 24 hours of the 72 that are available to you…

Faster, Better, Securer?

Can you do it faster?  Can you do it yourself faster?  Can you do it better? Maybe…

 

If you can handle OPx and a bit of High Level Assembler then “Bob’s your uncle!” – as long as Bob is your Auditor, of course… There are more than enough examples on the web about how to write OPx readers. (There are even vendors that are willing to sell you their software 🙂 )


A guideline in 5 big steps to Audit in a new way for Personal Data protection:


1 – To do list – IFI commands, IFCIDS & Audit Class

Create a program that can issue the required IFI commands to start and stop traces and to issue the READA and READS IFI calls that you wish to have. If you don’t know which Audit Class maps to which IFCIDs – here’s my handy list:

Audit Class | IFCIDs | What | Why
1 | 140 | Authorization failures | Possible brute force password attack
2 | 141 | GRANTs and REVOKEs | Possible temporary raising of privilege
3 | 142 | CREATE/ALTER/DROP of table with AUDIT attribute | Removing the AUDIT attribute is a big red flag
4 | 143 | 1st Change of table with AUDIT attribute | Sensitive data updates
5 | 144 | 1st Select of table with AUDIT attribute | Sensitive data usage
6 | 145 | BIND/PREPARE using table with AUDIT attribute | Possible usage of sensitive data
7 | 55, 83, 87, 169, 319 | SET CURRENT SQLID, end of identify, end of signon CICS/IMS | Elevation of privilege
8 | 23, 24, 25, 219, 220 | Db2 Utilities | UNLOADing data to where?
9 | 146, 392 | For customer use | n/a
10 | 269, 270, 271* | Establish reuse trusted context, trusted context CREATE/ALTER and Column Mask/Row Permission CREATE/DROP/ALTER | Anything with MASKs or ROW Permissions must be checked
11 | 361 | Successful access | Only makes sense if working with AUDIT POLICY SYSADMIN or DBADMIN

* 271 is not started automatically with Class 10.

So, what is missing from this picture?

Well, what about the COMMANDs, so you can actually see what people are issuing on the machine? For that you must also add IFCIDs 90 and 91. That still does not show you any of the DDL, so you must also start IFCID 62 – you do want to see what people are ALTERing, CREATEing and DROPping, don’t you?
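To make that concrete, here is a minimal sketch of the trace commands involved. The class list comes straight from the table above; the OPX destination, the buffer size and the use of a “local use” class 30 for the extra IFCIDs are my assumptions – check the exact syntax against your Db2 version and your site standards before using it.

Start the audit classes, writing to the next free OPx buffer:

-START TRACE(AUDIT) CLASS(1,2,3,4,5,6,7,8,10,11) DEST(OPX) BUFSIZE(1024)

Add the commands (90/91), the DDL (62) and the IFCID 271 that Class 10 does not start automatically:

-START TRACE(PERFM) CLASS(30) IFCID(62,90,91,271) DEST(OPX) BUFSIZE(1024)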


2 – All done in design?

Once you have a system that gathers all these you must buffer them in memory and then do one of two things:

1) Write them out to a file for post-processing and enrichment later
2) Directly write them to your SIEM system of choice

I am not a fan of number 2 because if the SIEM system cannot accept the data then your data is lost…

With No. 1 you can keep processing even if the SIEM system cannot ingest, for whatever reason, and as long as you trigger yourself every 5 – 10 minutes or when the OPx buffer is full then that is “real time” enough for me!


3 – Care for something else for the weekend?


Plus, you can enrich the data with extra information that the naked IFCIDs do not have attached to them, e.g. map the DBID/PSID to a real database and tablespace name. Store all of this data in a bunch of Db2 tables, then kick off a Db2-to-LEEF re-formatter program that reads all the audit data collected since the last run and writes it all out.
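As a sketch of the enrichment idea: a simple catalog lookup turns the DBID/PSID pair into names. The audit landing table and its columns here are purely hypothetical – substitute whatever your own collector writes:

-- Hypothetical landing table AUDIT.RAW_IFCID_DATA filled by the OPx reader
SELECT A.EVENT_TS
      ,A.AUTHID
      ,A.DBID
      ,A.PSID
      ,TS.DBNAME
      ,TS.NAME AS TSNAME
FROM  AUDIT.RAW_IFCID_DATA A
LEFT  OUTER JOIN SYSIBM.SYSTABLESPACE TS
      ON  TS.DBID = A.DBID
      AND TS.PSID = A.PSID
WHERE A.EVENT_TS > CURRENT TIMESTAMP - 10 MINUTES
WITH UR
;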


4 – Code Page – don’t forget the Code Page

Then you must do an ICONV for code page conversion, swiftly followed by a gzip to get it down to size and then copy it down to a USS file ready for your SIEM system to ingest its prey.


5 – Done, Finished and Complete

To prove that this system is always available you should also schedule a simple

GRANT SELECT ON SYSIBM.SYSDUMMY1 TO PUBLIC;

every morning at 06:00, which you should then see in the output every day. Remember – Due Diligence!
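If you want to check that heartbeat automatically as well, a query along these lines against your own audit repository will do it – the table and column names here are invented, so substitute whatever your collector actually populates:

-- Hypothetical repository table filled from the collected IFCID 141 records
SELECT COUNT(*) AS HEARTBEATS_TODAY
FROM  AUDIT.DDL_EVENTS
WHERE EVENT_DATE = CURRENT DATE
  AND SQL_TEXT LIKE 'GRANT SELECT ON SYSIBM.SYSDUMMY1%'
WITH UR
;

Zero rows at 06:30 means your capture chain is broken – which is exactly what you want to find out before an auditor does.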


 

So now you are done and finished and can let the SIEM system do the completion!

 

As always, any questions or comments would be most welcome!

TTFN,

Roy Boxwell

PS: Please note that the bold of parts of the GDPR text is only from me!

2016-08 – Is it Safe? How to recover accidentally dropped tables

Do you have the DDL anywhere?
Was there a “recent” image copy or disk back-up? Who can you call for help?

The newsletter title this month really has nothing to do with the film “Marathon Man”, but sometimes backup and recovery can feel just like having your teeth drilled… Anyway, the title is actually meant to get you to think again about your site’s back-up and recovery definitions – specifically that age-old chestnut about “accidentally” dropped objects.

RECOVER from DROP?

I saw in LISTSERV a discussion about recovering from dropped tables and tablespaces. It is a pretty horrible situation when you realize that you have just confirmed the drop of a test table, only to see – as your finger is descending towards the ENTER key – that it is, in fact, a different name…

DDL and Back-ups Handy?

Normally, at this point, the air is filled with colorful language and interesting local metaphors. Once it has calmed down a bit the real work starts: Do you have the DDL anywhere? Was there a “recent” image copy, or a disk back-up? Who can you call for help?

Mirror Mirror on the wall

Remember that mirroring etc. will not help you as the DROP has also been successfully mirrored. So within a moment, the data was also dropped at your disaster recovery site. (Argh!)

Newbies then start looking in SYSIBM.SYSCOPY for the last image copies, while we grey-haired oldies (sorry: more mature experts) start looking in production control copy libraries and BETA92!

The heat is on!

At this point a couple of things happen: The telephone starts ringing and a manager-type person materializes to ask annoying questions all the time. Typically: “How long will it take to get the data back?”

Now you have two possibilities:

1. You are in luck! Someone somewhere extracted all the DDL for the table(s), along with the DBID, PSID, and OBID(s), so you can generate a DSN1COPY job from the last found Image Copy dataset.


2. You are *not* in luck! You have no idea how the table looked, and you cannot see if anyone ALTERed it in the last five years or so.

Crashed and Burned

If you are in position 2, it is now a good time to update your CV and make sure your desk is neat and tidy… Now you might have access to some nifty third party vendor tools, but for that you must at least have the dataset name of the last Image Copy and – of course – the third party tool itself! Or the ability to read the DB2 log and “resurrect” the table defs from there – not a pretty place to go, I assure you!

Back up that chain of thought for a Moment

So let’s rewind and imagine that you are doing this all differently… What about beginning today with an extract of all the DDL on your system? Then capture all of the IFCID 62’s that show any DDL changes. Going further: what about getting all of the IFCID 220’s to get dataset allocations correlated with Utility IDs and DBIDs and PSIDs.

Imagine what you could then do?

Wow! Cavalry over the hill

Yep, you have a “history” of all the DDL that has happened on your machine right up to the point when your object was DROPed *and* you have the dataset name(s) of the last image copies, as well as all the internal IDs to enable a successful DSN1COPY job complete with OBID translation! Cool huh? So suddenly you are now a hero instead of a villain!
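For the curious, here is roughly what that DSN1COPY job ends up looking like. Everything in angle brackets is a placeholder, the PARM options assume the input is a full image copy, and the SYSXLAT pairs (old value, new value) come from your captured DDL/IFCID data – treat this as a sketch to check against the DSN1COPY documentation, not as a ready-to-run job. The target table must already have been re-created from the extracted DDL (so the new DBID/PSID/OBIDs exist) and the target table space should be stopped while DSN1COPY writes into it:

//DSN1COPY EXEC PGM=DSN1COPY,PARM='FULLCOPY,OBIDXLAT,RESET'
//STEPLIB  DD DISP=SHR,DSN=<db2hlq>.SDSNLOAD
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=<last.image.copy.dataset>
//SYSUT2   DD DISP=OLD,DSN=<vcat>.DSNDBC.<database>.<tablespace>.I0001.A001
//SYSXLAT  DD *
<old-dbid>,<new-dbid>
<old-psid>,<new-psid>
<old-obid>,<new-obid>
/*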

DIY or Buy in?

So much for the theory – What about in practice? Well you can write it all yourself, or you could use this as a sort of useful side effect from our WorkloadExpert (WLX) software, which has all this built in! We already get all these IFCIDs, we already have a DDL Generator bundled with the WLX software for the Audit Use Cases, and so it really kills two birds with one stone!

Restricted movement?

Now, of course, you could argue: “Wait! I have RESTRICT ON DROP set for all my production tables!” This works really well for accidental drops, but I have seen lots of places where it should be used but was in fact forgotten. How can you check? Run this little SQL to validate that what you think is true really is true:


-- Count the tables that are protected by RESTRICT ON DROP (CLUSTERTYPE = 'Y')
-- and those that are not, ignoring the Db2 directory and catalog databases
SELECT COUNT(*) AS TABLES
      ,COUNT(CASE WHEN CLUSTERTYPE = 'Y' THEN 1 ELSE NULL END)
                 AS DROP_RESTRICT
      ,COUNT(CASE WHEN CLUSTERTYPE = ' ' THEN 1 ELSE NULL END)
                 AS NOT_DROP_RESTRICT
FROM SYSIBM.SYSTABLES
WHERE TYPE IN ('T' , 'M' , 'H' , 'R' )
  AND NOT DBNAME IN ('DSNDB01' , 'DSNDB06' )
WITH UR
;

 

I hope the results don’t have you feeling like Dustin Hoffman in the movie.

As usual any questions or comments are welcome,

TTFN Roy Boxwell

2016-05 What’s in an Acronym? ICI

Do you know the difference between BIF & ICI?

Release migration: Do you know what role ICIs play in Db2 compatibility?

Just over a year ago I did a little newsletter all about IFCIDs 366 and 376 – what I call BIFCIDs – that report the usage of a function that has changed, or a function that works now but will either fail or work differently at some point in the future, e.g. after a Db2 release migration.

When is a BiF not a BiF?

Now, back then I labelled these guys “BIFCIDs” and asked: “Where’s the BiF?” I see now that the name I chose was actually pretty inaccurate! If you take the time to study the macro versions of 366 and 376 (all described in the SDSNMACS library member DSNDQW05) you see that of the 22 QW0366FN values only six are actually for a Built-in Function “problem” – all the rest are really ICIs.

What’s an ICI? It’s a TLA (another nice acronym), from Db2 development and stands for “Incompatible Change Indicator”. A really catchy name for a really nasty problem! So let us review the IFCID details, but now from the perspective of “Are you a BIFCID or are you an ICI?”

 

The details for BIFCIDs – the BiF-related values from the IFCID list:

***********************************************************************
**  QW0366FN = 1                                                      *
**    Indicates that the pre Version 10 CHAR built-in function has    *
**    been invoked. There is an incompatible change to the output of  *
**    the CHAR function for some decimal data. The zparm              *
**    BIF_COMPATIBILITY and/or the SYSCOMPAT_V9 schema have been used *
**    by this application to get the old behavior. Please make the    *
**    appropriate changes and rebind with the SYSCURRENT schema to    *
**    use the Version 10 CHAR(decimal) built-in function.             *
**    (PM29124 V10 only, usermod V8/V9)                               *
**  QW0366FN = 2                                                      *
**    Indicates that the pre Version 10 VARCHAR built-in function or  *
**    CAST(decimal AS CHAR or VARCHAR) has been invoked.              *
**  QW0366FN = 10                                                     *
**   RTRIM, LTRIM or STRIP version 9 being used with mixed data       *
**  QW0366FN = 1109                                                   *
**    Indicates that CAST(string AS TIMESTAMP) was processed for the  *
**    input string of length 8 and input was treated as a store clock *
**    value (or input string was of length 13 and was treated as a    *
**    GENERATE_UNIQUE value). This behavior is incorrect for a CAST   *
**    and is valid for TIMESTAMP built-in function only. This behavior*
**    is being corrected in Db2 11 so that input to CAST is not       *
**    treated as a store clock value nor GENERATE_UNIQUE.             *
**  QW0366FN = 1110                                                   *
**    Indicates the integer argument of SPACE function is greater     *
**    than 32764.                                                     *
**  QW0366FN = 1111                                                   *
**    Indicates the optional integer argument of VARCHAR function     *
**    has a value greater than 32764.                                 *
***********************************************************************

 

And now the details for the ICIs from the IFCID list:

***********************************************************************
**  QW0366FN = 3                                                      *
**    Indicates that an unsupported character representation of a     *
**    timestamp string was used. PM48741 V10 only.                    *
**  QW0366FN = 4                                                      *
**   A QW0366FN 4 record indicates that the statement uses the        *
**    word ARRAY_EXISTS as an unqualified user-defined function name  *
**    in a context that may be incompatible with Version 11.          *
**  QW0366FN = 5                                                      *
**    A QW0366FN 5 record indicates that the statement uses the       *
**    word CUBE as an unqualified user-defined function name          *
**    in a context that may be incompatible with Version 11.          *
**  QW0366FN = 6                                                      *
**   A QW0366FN 6 record indicates that the statement uses the        *
**    word ROLLUP as an unqualified user-defined function name        *
**    in a context that may be incompatible with Version 11.          *
**  QW0366FN = 7                                                      *
**   A QW0366FN 7 record indicates that Db2 for z/OS server issued    *
**   a SQLCODE -301 for incompatible data type conversion from        *
**   string data type (e.g. CHAR, VARCHAR, GRAPHIC, VARGRAPHIC        *
**   etc.) to numeric data type in V10 CM mode when implicit          *
**   cast is not supported or V10 NFM mode when DDF_COMPATIBILITY     *
**   zparm is set to DISABLE_IMPCAST_NJV or SP_PARMS_NJV to           *
**   disable implicit cast, and the client is CLI driver              *
**   or v11 NFM mode & APPLCOMPAT = V10R1 when DDF_COMPATIBILITY      *
**   is set to SP_PARMS_NJV or DISABLE_IMPCAST_NJV to disable         *
**   implicit cast either from string data type to numeric or         *
**   from numeric data type to string data type.                      *
**  QW0366FN = 8                                                      *
**   A QW0366FN 8 record indicates that Db2 for z/OS server           *
**   returned output data match the data types of the                 *
**   corresponding CALL statement arguments when DDF_COMPATIBILITY    *
**   zparm is set to SP_PARMS_NJV.                                    *
**                                                                    *
**  QW0366FN = 9                                                      *
**    A QW0366FN 9 record indicates a data type conversion from       *
**    a TIMESTAMP WITH TIME ZONE input to a TIMESTAMP data            *
**    during input host variable bind-in process on server when       *
**    DDF_COMPATIBILITY zparm is set to IGNORE_TZ to ignore the       *
**    time zone information sent by Java IBM Data Server Driver.      *
**  QW0366FN = 1101                                                   *
**    Indicates that the INSERT statement that inserts into an XML    *
**    column without XMLDOCUMENT function has been processed (which   *
**    should result in SQLCODE -20345 when run on Db2 release prior   *
**    to V11). Starting with V11, SQL error will no longer be issued. *
**    Application will no longer recieve SQLCODE for this statement.  *
**  QW0366FN = 1102                                                   *
**    Indicates that V10 XPath evaluation behavior was in effect which*
**    resulted in an error. For instance, a data type conversion error*
**    could have occured for a predicate that would otherwise be      *
**    evaluated to false. Starting from V11, such "irrelevant" errors *
**    might be suppressed so an application might no longer recieve   *
**    the SQLCODE for this statement.                                 *
**  QW0366FN = 1103                                                   *
**    Indicates that a dynamic SQL uses the ASUTime limit that has    *
**    been set for the entire thread for RLF reactive governing.      *
**    For instance, when a dynamic SQL is processed from package A,   *
**    if the ASUTime limit is already set during other dynamic SQL    *
**    processing from package B in the same thread, the SQL from      *
**    package A will use the ASUTime limit set during the SQL         *
**    processing from package B. Stating with v11, dynamic SQLs from  *
**    multiple packages will use the ASUTime limit that is set        *
**    considering its own package information.                        *
**  QW0366FN = 1104, 1105, 1106, 1107                                 *
**    Indicates that CLIENT special register (CLIENT_USERID,          *
**    CLIENT_WRKSTNNAME, CLIENT_APPLNAME, CLIENT_ACCTNG) has been set *
**    to a value that is longer than what is supported prior to V11.  *
**    A shorter value has been used instead.                          *
**  QW0366FN = 1108                                                   *
**    Indicates that CLIENT special register (CLIENT_USERID,          *
**    CLIENT_WRKSTNNAME, CLIENT_APPLNAME, CLIENT_ACCTNG) has been set *
**    to a value that is longer than what is supported prior to V11.  *
**    Truncated values upto the supported lengths prior to v11 have   *
**    been used for RLF table search instead.                         *
**  QW0366FN = 1112                                                   *
**   Indicates the empty XML element is serialized to <X></X>         *
**   instead of <X/>.                                                 *
***********************************************************************

What does this all mean?

You might well be wondering: “What’s the point?” Well, the point is to try and get the ICIs acknowledged in your company and to start checking for them now. Why do I say this? Because APPLCOMPAT is only going to work for two down levels. What that means is that APPLCOMPAT at Db2 10 is good for Db2 11 and Db2 12 but *dies* horribly at Db2 12+1 which, if IBM stick to their track record of Db2 delivery dates, will be October 2019. Not that far away in the future, is it?
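If you want to know where you stand today, a quick look in the catalog shows how many packages are still pinned to an old level. As far as I know the APPLCOMPAT column exists in SYSPACKAGE from Db2 11 on – a blank value simply means the package has not been rebound since then – but do verify this against your own catalog level:

SELECT APPLCOMPAT
      ,COUNT(*) AS PACKAGES
FROM  SYSIBM.SYSPACKAGE
GROUP BY APPLCOMPAT
ORDER BY APPLCOMPAT
WITH UR
;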

 

The BiF problem is fixed – long live the ICI Problem!

OK, you might have traced and tracked your BIFs, but what about the ICIs? At one firm I visited, all the BIFCIDs were clean until this February which, this being a leap year, had 29 days. This caused different programs to run and… Tra La! New BIFCIDs came creeping out of the woodwork… along with some new ICIs…

 

Freeware to the rescue

To help you save time, just download and test our little BiF Freeware, which tracks down all ICIs and BiFs for you. Register, download, and then see where your system stands in the ICI rankings. We have also changed our Use Case “BIF usage” in WorkloadExpert to be only for BiFs and have introduced a new Use Case called “BIF/ICI occurrences” which shows them all.

As usual, any questions or comments are gladly welcome!

 

TTFN

Roy Boxwell

GIVE and TAKE Program



Give and Take 2020

Information on the Give and Take Programs 4,5,6,7


Previous Give & Take

We have “GIVEn” various free-of-charge Use Cases from our SQL WorkloadExpert for Db2 z/OS like:

1 Index Maintenance Costs

2 EXPLAIN Suppression

3 BIF Usage

Limited free-of-Charge Db2 Application

This Program, which started in Europe during our 30th anniversary, was such a success that it is now being extended for the benefit of North American Db2 z/OS sites.

More information: http://www.seg.de/produkte/db2-zos-produkte/sql-workloadexpert-for-db2-zos/



SQL WorkloadExpert for Db2 z/OS (WLX) contains several “Use Cases”. We provided three of them, free of charge, for one month to different sites. In return, we received their results. We’d like to share this with you now.

We have “GIVEn” various free-of-charge Use Cases from SQL WorkloadExpert for Db2 z/OS like:

1  Index Maintenance Costs
2  EXPLAIN Suppression
3  BIF Usage
BIX HealthCheck – this last one is still available

We TAKE the anonymized results for research and will communicate with the local User Groups for discussions.

Inspiring experiences

Customer Statements

Read the Customer Comments across the Industry 

 

  • Health Care
  • Insurance
  • Banking
  • Car Manufacturing

First results from Db2 z/OS sites

1 Index Maintenance Costs – Nearly all the data we got back showed a positive result for created Indexes…
2 EXPLAIN Suppression – About 10% of SQLs are actually “left over”…
3 BIF Usage – When migrating to a new Db2 version, the BIFs are not always compatible and an extreme amount of data is produced.

 

The difficulty of near-time analysis to track down BIFs within dynamic SQL has been solved with the BIF Usage Use Case…


Program 3 – BIF Usage – has now started

 

Presentation – BIF Compatibility: Db2 10 compatibility mode. Changes to the STRING formatting of decimal data within the CHAR and VARCHAR built-in functions and to the CAST specification with CHAR and VARCHAR result types, as well as unsupported TIMESTAMP STRINGs.

White Paper – Finding BIFs, and how to lead a problem-free life with them in the future: Navigating the challenges of moving to a new Db2 release.

Newsletter – 2015-01 – BIFCIDs – Where’s the BIF? How will you deal with loop-hole usage in production code?

Video – BIF Usage (11 min.): Trap and correct the BIFs that will cause belly-ache one day soon.


2015-03: DB2 z/OS object changes: Quiet Times for maintenance

Do you have an idea when tables are in use?

 

Ahhh! Wouldn’t it be great if we all had just quiet times? Sadly we never have time for anything these days, let alone for peace and quiet!

The quiet before the Storm?

What I mean by Quiet Times is, however, different: it is the time when a given table, or set of tables, is not in use. This is very interesting to find out, especially when you are doing data definition changes (DDL). For example: you are given the task of adding some columns to some tables – naturally these days you have no idea who or what is actually using the tables, and absolutely no idea *when* they are being used.

What do you do?

Well, all you can do is schedule the change for early one morning and then quickly push the ALTERs and the REORGs through – hoping not to collide with any users of the data.

 

Guessing when tables are in use can be dangerous

This is all a bit haphazard and dangerous! Wouldn’t it be better if you could look at a calendar and see that this table is only used Mo – Th from 09:00 – 16:00 thus giving you a really big hint that Friday morning is a better bet?

 

Capture your DB2 SQL Workload & project the results into a Calendar view

Using the new and enhanced IFCIDs in DB2 10 you can now do this! Capture your workload and analyze when table(s) are being used and project the results into a Calendar view:

[Screenshot: calendar view of table usage]

 

Gives this style Output:

[Screenshot: example output]

 

Handy huh?

Video (3 min.)  – Presentation

– You can drag the dates back and forth to validate the assumptions of a period of time, and then you can happily do your ALTERs and REORGs during the day.

– Apart from not having to get up early, the added bonus is that you get to learn more about who uses the tables!

Of course this system is *not* a crystal ball! It is just showing historical usage. Who knows what the future holds?

Would this style of output be useful for you? Could you imagine this helping you in your day-to-day tasks?

 

As usual any queries or criticism gladly accepted!

TTFN,

Roy Boxwell

2015-01 BIFCIDS – Where’s the BIF?

How will you deal with loop-hole usage in production code?

 

 

The IFCIDs 366 and 376

DB2 provides many and varied IFCIDs. But for today, I’m most interested in 366 and 376. The 366 is available in DB2 10 and the 376 in DB2 11. Now, I like to call these “BIFCIDs” because they are triggered whenever a BIF is used that will behave differently in the next release of DB2 than it does today. (They are also triggered when changing Application Compatibility settings in DB2 11 and higher.)

 

So where’s the BIF?

BIF Usage Video (11 min.)       Presentation

Well, a BIF is a Built-In Function such as CHAR, DECIMAL, etc. There are hundreds of them these days. In the last few DB2 releases, IBM has changed a few to make DB2 more compatible with SQL standards. They have actually closed a couple of loop-holes, where “bad” data could be accepted and processed.

 

Loop-hole user?

What happens is: someone somewhere found this loop-hole and used it in production code. Now, when you upgrade your DB2, this code will either fail or give erroneous results – which is never good. Hence IBM created the IFCID 366. This is output every time an SQL statement that contains a candidate BIF is PREPAREd or executed. There were so many of these that IBM introduced a sort of condensed version, which is only triggered once, for the first execution or prepare – but sadly that IFCID, 376, is only available from DB2 11.
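If you want to see them for yourself, switching the trace on is easy enough. The “local use” class 30 and the SMF destination here are my assumptions – adapt them to however your site collects trace data, and leave out 376 if you are still on DB2 10:

-START TRACE(PERFM) CLASS(30) IFCID(366,376) DEST(SMF)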

Where can, or will, this really hurt?

 

Looking into the documentation for these IFCIDs you will see a long list of when they are written:

***********************************************************************
**  IFCID 0366 is a serviceability trace.                            **
**  It can be used to identify applications that are affected        **
**  by incompatible changes.                                         **
**  The QW0366FN field indicates the type of incompatible Change:    **
**                                                                   **                                                      
**  QW0366FN = 1                                                     **
**  Indicates that the pre Version 10 CHAR built-in function has     **
**  been invoked. There is an incompatible change to the output of   **
**  the CHAR function for some decimal data. The zparm               **
**  BIF_COMPATIBILITY and/or the SYSCOMPAT_V9 schema have been used  **
**  by this application to get the old behavior. Please make the     **
**  appropriate changes and rebind with the SYSCURRENT schema to     **
**  use the Version 10 CHAR(decimal) built-in function.              **
**  (PM29124 V10 only, usermod V8/V9)                                **
**                                                                   **
**  QW0366FN = 2                                                     **
**  Indicates that the pre Version 10 VARCHAR built-in function or   **
**  CAST(decimal AS CHAR or VARCHAR) has been invoked.               **
**                                                                   **
**  QW0366FN = 3                                                     ** 
**  Indicates that an unsupported character representation of a      ** 
**  timestamp string was used. PM48741 V10 only.                     **
**                                                                   ** 
**  QW0366FN = 4                                                     ** 
**  A QW0366FN 4 record indicates that the statement uses the        **     
**  word ARRAY_EXISTS as an unqualified user-defined function Name   **   
**  in a context that may be incompatible with Version 11.           **     
**                                                                   **
**  QW0366FN = 5                                                     ** 
**  A QW0366FN 5 record indicates that the statement uses the        **
**  word CUBE as an unqualified user-defined function Name           **
**  in a context that may be incompatible with Version 11.           **
**                                                                   **
**  QW0366FN = 6                                                     **
**  A QW0366FN 6 record indicates that the statement uses the        **
**  word ROLLUP as an unqualified user-defined function Name         **
**  in a context that may be incompatible with Version 11.           **
**                                                                   ** 
**  QW0366FN = 7                                                     **
**  A QW0366FN 7 record indicates that DB2 for z/OS server issued    **
**  a SQLCODE -301 for incompatible data type conversion from        **
**  string data type (e.g. CHAR, VARCHAR, GRAPHIC, VARGRAPHIC        **
**  etc.) to numeric data type in V10 CM mode when implicit          **
**  cast is not supported or V10 NFM mode when DDF_COMPATIBILITY     **
**  zparm is set to DISABLE_IMPCAST_NJV or SP_PARMS_NJV to           **
**  disable implicit cast, and the client is CLI Driver              **
**  or v11 NFM mode & APPLCOMPAT = V10R1 when DDF_COMPATIBILITY      **
**  is set to SP_PARMS_NJV or DISABLE_IMPCAST_NJV to disable         **
**  implicit cast either from string data type to numeric or         ** 
**  from numeric data type to string data type.                      **
**                                                                   **
**  QW0366FN = 8                                                     **
**  A QW0366FN 8 record indicates that DB2 for z/OS server           **
**  returned output data match the data types of the                 **
**  corresponding CALL statement arguments when DDF_COMPATIBILITY    **
**  zparm is set to SP_PARMS_NJV.                                    **
**                                                                   **
**  QW0366FN = 9                                                     **
**  A QW0366FN 9 record indicates a data type conversion from        **
**  a TIMESTAMP WITH TIME ZONE input to a TIMESTAMP data             **
**  during input host variable bind-in process on server when        **
**  DDF_COMPATIBILITY zparm is set to IGNORE_TZ to ignore the        **
**  time zone information sent by Java IBM Data Server Driver.       **
**                                                                   **
**  QW0366FN = 10                                                    ** 
**  RTRIM, LTRIM or STRIP version 9 being used with mixed data       **
**                                                                   **
**  QW0366FN = 1101                                                  ** 
**  Indicates that the INSERT statement that inserts into an XML     **
**  column without XMLDOCUMENT function has been processed (which    **
**  should result in SQLCODE -20345 when run on DB2 release prior    **
**  to V11). Starting with V11, SQL error will no longer be issued.  **
**  Application will no longer recieve SQLCODE for this Statement.   **
**                                                                   ** 
**  QW0366FN = 1102                                                  **
**  Indicates that V10 XPath evaluation behavior was in effect which **
**  resulted in an error. For instance, a data type conversion error **
**  could have occured for a predicate that would otherwise be       **
**  evaluated to false. Starting from V11, such "irrelevant" Errors  **
**  might be suppressed so an application might no longer recieve    **
**  the SQLCODE for this Statement.                                  **
**                                                                   **
**  QW0366FN = 1103                                                  **
**  Indicates that a dynamic SQL uses the ASUTime limit that has     **
**  been set for the entire thread for RLF reactive governing.       **
**  For instance, when a dynamic SQL is processed from package A,    **
** if the ASUTime limit is already set during other dynamic SQL      ** 
** processing from package B in the same thread, the SQL from        **
** package A will use the ASUTime limit set during the SQL           **
** processing from package B. Stating with v11, dynamic SQLs from    **
** multiple packages will use the ASUTime limit that is set          **
** considering its own package information.                          **
**                                                                   **
** QW0366FN = 1104, 1105, 1106, 1107                                 **
** Indicates that CLIENT special register (CLIENT_USERID,            **
** CLIENT_WRKSTNNAME, CLIENT_APPLNAME, CLIENT_ACCTNG) has been set   **
** to a value that is longer than what is supported prior to V11.    **
** A shorter value has been used instead.                            **
**                                                                   **
** QW0366FN = 1108                                                   **
** Indicates that CLIENT special register (CLIENT_USERID,            **
** CLIENT_WRKSTNNAME, CLIENT_APPLNAME, CLIENT_ACCTNG) has been set   **
** to a value that is longer than what is supported prior to V11.    **
** Truncated values upto the supported lengths prior to v11 have     **
** been used for RLF table search instead.                           **
**                                                                   **
** QW0366FN = 1109                                                   **
** Indicates that CAST(string AS TIMESTAMP) was processed for the    **
** input string of length 8 and input was treated as a store clock   **
** value (or input string was of length 13 and was treated as a      **
** GENERATE_UNIQUE value). This behavior is incorrect for a CAST     **
** and is valid for TIMESTAMP built-in function only. This behavior  **
** is being corrected in DB2 11 so that input to CAST is not         **
** treated as a store clock value nor GENERATE_UNIQUE.               **
**                                                                   **
** QW0366FN = 1110                                                   **
** Indicates the integer argument of SPACE function is greater       **
** than 32764.                                                       **
**                                                                   **
** QW0366FN = 1111                                                   **
** Indicates the optional integer argument of VARCHAR function       **
** has a value greater than 32764. *                                 **
***********************************************************************

 

Useful stuff indeed!

Phew! Not a bad list, huh? Now you see why these IFCIDs are so useful. It could well be that you have none of these “alive” in your system today. Or, of course, it could be that you get millions of the things! Somehow you will have to work out a way to save the data, analyse it to get to the root cause, and then, finally, fix the problem(s).

 

Saved by APPLCOMPAT?

You could argue that the new DB2 11 parameter Application Compatibility will save you, but this is really a false economy. All it enables is the guarantee that the code will still “run”. However, two more DB2 releases down the line the code will fail and, by then – so in about six years – who will even know *how* to change which piece of source code and, perhaps, even where that source code is?

 

Saved by BIFCIDs

Personally, what I would do is: run our SQL WorkloadExpert tool to trap all the required [B]IFCIDs for a few hours (at first!). Then I would analyse the results, fix the code where it needs fixing – and repeat! I would keep doing this until no more IFCID records are coming out, and then I would be set!

BIF Usage Video (11 min.)       Presentation

What is even better is that our SQL WorkloadExpert will work correctly even when new QW0366FN values appear – so when IBM decides to add another code (like the new values 9 and 10 above, for example) this BIF Usage still works correctly.

 

Of course, you may have another tool that you use at your site.

Can it see “Where’s the BIF?”

How will you deal with loop-hole usage in production code?

 

As usual, any questions or comments are gladly welcome!

 

TTFN

Roy Boxwell

 

 

2015-02: DB2 z/OS AUDIT – Boring Boring Boring

 

New IFCIDs 316 – 400/401 for DB2 z/OS Audit “on the fly”

I use the Monty Python title as it reminded me of an old telephone book joke in England: look up “boring” in the Yellow Pages – there you would find a little bit of text that simply said “See Civil Engineering.” It still makes me laugh these days. Anyway, back to the newsletter…

Auditing is often looked down upon as being boring, tedious and of no worth. This is rubbish of course! Without auditing we would not be allowed to do anything these days. I hardly know any DBAs who still have SYSADM.

It is just too darn powerful!

The statistics also tell us that nearly all “hack” attacks are “inside jobs” from the very people we know and trust. The statistics are also pretty brutal on the Mainframe/Server divide – Mainframes are very rarely successfully attacked, but Servers (see Sony et al.) are all the time. I think there should be a lot more auditing on the little iron really!

 

So how do you audit on the host side of the street?

Do you actively check what is happening? Or do you just wait for the thought police to arrive?

 

Enabling Auditing from DB2 10

When the enhanced IFCID 316 and the new IFCIDs 400/401 were introduced, they closed a gap in the ability to actually Audit your system “on the fly”. Using these IFCIDs you can actually trap/monitor/audit all of the SQL running in your Plex. These IFCIDs are also nearly free, as the overhead is at “background noise” levels of CPU.

Now, let us imagine that you are capturing all of this data: that you are regularly snapping both the DSC and the SSC (that’s what I call the EDM pool cache for static SQL statements), that this data is all being rolled up and saved into a DB2 data warehouse, and that you are triggering batch jobs to analyse it for audit reporting – using various queries just to see if anything “untoward” is starting, or has happened!

 

What queries would you want to run?

I have a few straight off the bat here:

– Who is reading from the Payroll table?
– Who is updating the Payroll or Employee tables?
– Who is accessing *any* table from the internet?
– Is anybody being really clever and using ODBC to select from my production tables?
– How many userids are out there using my data?
– Has any SYSADM enabled userid done any work on my system today?

All good Audit questions that you could put into operation very simply indeed!
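Just to make “very simply indeed” concrete, the first of those questions might end up as a query something like this. The table and column names are invented for illustration – they stand for whatever your capture tool rolls its SQL workload up into:

-- Hypothetical warehouse table holding the captured, rolled-up SQL workload
SELECT W.PRIMAUTH
      ,W.WORKSTATION
      ,COUNT(*) AS EXECUTIONS
FROM  AUDIT.SQL_WORKLOAD W
WHERE W.TABLE_CREATOR = 'PROD'
  AND W.TABLE_NAME    = 'PAYROLL'
  AND W.STMT_TYPE     = 'SELECT'
GROUP BY W.PRIMAUTH, W.WORKSTATION
WITH UR
;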

 

Oh Lucky Man!

Now as luck would have it, we have a software product called SQL WorkloadExpert that actually does all this for you! What’s more you can expand it as much as you like! Cool huh?

– Who is accessing *any* table from the Internet?…
– Has any SYSADM enabled userid done any work on my system today?…
– How many userids are out there using my data?…

Audit Video (5 min.)   –  Presentation

Looking at this screenshot you can get an idea of the possibilities – look at the Workstation name column, for instance. “192.xxx” is the intranet. If any other tcp/ip address showed up here, it would be, shall we say, “worrying”. You can also see great stuff like “EXCEL.EXE” in the Transaction name column, and that a certain Mr. Boxwell has been running a few things from lots of data sources – I wonder what he’s up to?

[Screenshot]

 

Here you can see a nice list of “Intents” against a given table (in this case SYSIBM.SYSTABLES)
[Screenshot]

 

Now you can see who did what type of insert against a given object.
[Screenshot]

 

This is a list of *all* Primary Authorization IDs or Collections and Packages that have run
– Any intruders?
[Screenshot]

 

This is a list of all User Data Updates done by users with SYSADM authority in the last workload.
[Screenshot]

 

Triggering the Auditor

All of this with drill down to the actual SQL that was executed. Cool stuff and very handy indeed! But this is “past the point” and what you really need is a Batch Style interface that runs the SQLs and triggers alarms on the host before someone looks at the pretty GUI!

I would do this with a nice little set of batch SPUFIs that get post-processed and either e-mailed directly to the Auditors, or WTO’d as an alarm action that then triggers a batch job to do something else.

 

What would you like to Audit? Or what would your Auditors like to see? How do you currently accomplish this?

I would be fascinated to hear from you!

TTFN,

Roy Boxwell