IBM System z Technical University – Vienna, Austria – May 2-6




zZS26 DB2 Data Sharing Performance For Beginners
Martin Packer




                                                                 © 2011 IBM Corporation




Abstract

         This presentation provides an introductory-level view of how to
          look at the DB2 Data Sharing performance numbers
            – from both a z/OS / RMF and a DB2 perspective

         Performance topics include
           – XCF
           – Coupling Facility
           – Data Sharing Structures
           – The application's perspective

         Performance topics don’t include
           – Other forms of Data Sharing eg VSAM RLS
           – Overly detailed descriptions







Agenda

         What’s The Point Of Data Sharing?

         Introduction to Parallel Sysplex

         Introduction to DB2 Data Sharing

         Performance

         Summary








What’s The Point of Data Sharing?

         Higher Availability
              – Reduction in single points of failure
                    • If configured properly
                    • If effectively planned
         Greater Scalability
              – Additional z/OS LPARs providing additional resources
                    • Typically on different footprints
                    • Potentially “on the fly”
                    • Nearly linear
              – Additional DB2 subsystems providing more virtual storage
         Software Licensing Considerations
              – e.g. Parallel Sysplex License Charge (PSLC)








Hot Standby Scenarios

         Many installations configure redundant LPARs
           – Actively processing their share of the load
            – Spread one or more LPARs across one or more footprints
           – Examples:
                   • 1 LPAR on each machine
                   • 2 LPARs on each of two machines
              – When one Data Sharing member fails the rest service the work
              – Issues:
                   •   How do you avoid affinities?
                   •   How do you ensure work gets routed to the right surviving LPAR?
                   •   In general, what ARE your recovery policies?
                   •   How do you avoid the cost of too many LPARs?
                   •   Should we have hot standby DB2 Subsystems instead of LPARs?




Technical Introduction








Major Parallel Sysplex Components


               [Diagram: two members, each running an application and XCF, connected by links
                to a Coupling Facility containing structures; a Sysplex Timer keeps both
                members synchronised]




Major Parallel Sysplex Components

     Coupling Facilities (CFs)
        – With structures
        – Usually more than one CF
        – Run CFCC code rather than z/OS
     Members
        – Such as LPARs
        – Up to 32 members
     Links between Members and CFs
         – Configured for bandwidth and availability
     XCF
        – Provides a communications mechanism
     Applications
        – Exploiting CF structures directly
        – Using XCF services
     Timer
        – Ensures all members are synchronised
        – Traditionally Sysplex Timer
        – Strategically Server Time Protocol network
     XES manages communications






XCF
         General signalling mechanism
              – Introduced before the other Parallel Sysplex functions
         Traffic divided into transport classes
            – These use either Coupling Facility structures or CTCs to pass messages
                    • Dynamically routed based on XCF observing performance
                    • Dedicated CTCs
                         – Originally were faster than CF structures
                         – Must define a pair of paths for each connection
                         – Definition can get quite complex
              – Transport classes have a maximum message size
                    • Fitting messages to classes is a significant tuning item
                         – One message per buffer
                            > Buffer space wasted if message smaller than the buffer
                            > Additional processing if it’s bigger
         Applications use specific XCF group names
           – Example: IXCLOxxx is XES lock resolution
           – Application address spaces connect as members of the group








Coupling Facilities

         Usually on same footprint as z/OS, Linux or z/VM images
              –   Called an Integrated Coupling Facility (ICF)
              –   ICFs generally on (cheaper) PUs characterised as ICF PUs
              –   Can be stand-alone
               –   If the ICF is on the same footprint as the z/OS images that use it, a footprint
                   failure would bring down both the z/OS images and the ICF
         Speed of CF PUs relative to sharing z/OS image PUs important for
          performance
           – ICF PUs automatically matched to z/OS PUs on the same footprint
         CF PUs can be shared
              – Generally reduces coupling performance
                    • Especially if Dynamic Dispatch turned on
              – Only recommended for Development and Test Parallel Sysplexes
         CFCC code releases called Coupling Facility Levels (CFLEVELS)



Coupling Facility Structures
      Cache Structures
          – Data is cached
                • Requests await the return of data
          – Example: Enhanced Catalog Sharing
                • SYSIGGCAS_ECS
      Lock Structures
          – Control granting of locks
                • Requests don’t await the return of the lock
           – Example: GRS Star
                • ISGLOCK
      List Structures
          – Groups of “lists”
                • Which are more like arrays
          – Example: XCF
                • IXCPATH_CFP2
      Serialized List Structures
           – Example: WebSphere MQ queues
                • MQPGCSQ_ADMIN
      Different performance and availability characteristics for each exploiter





CFSIZER

         Website that enables you to size CF structures
              – http://www.ibm.com/systems/z/cfsizer/
              – Most IBM Product structures catered for
                    • Given a product name it tells you which structures you want
                       – And suggests a size
              – Produces an “initial” configuration
                    • For example DB2 structures are likely to need tuning
                        – In fact CFSIZER probably suggests too few GBPs
              – Has good Help
              – Recently updated






Coupling Facility Links
    Different types:
         – Internal Coupling (IC)
             • Extremely fast, within one footprint
          – Integrated Cluster Bus (ICB-4 and ICB-3)
             • Fast, very short-distance links between footprints
         – Inter-Systems Coupling (ISC-3)
             • Slower, longer distance
                    –   Speed decreases with distance
               • Needed for eg cross-town sysplexes
          – InfiniBand Statement of Direction for System z10 EC

    Redundancy and bandwidth are both important
    Each generation of each of these provides greater capability
         – Typically speed
         – Generations tend to go with processor families
    Typical configuration:
       – 2 footprints
               • Each has 1 or 2 member z/OS images
               • Each has 1 Internal Coupling Facility (ICF) image
               • IC links between members and ICF image on the same footprint
                    –   ISC links between a member and its remote ICF
                          > ICB links would imply footprints physically very close






Synchronous and Asynchronous CF Requests
   Synchronous (Sync)
        – z/OS engine waits for completion
           • Each microsecond of request service time is a microsecond of lost engine capacity
         – e.g. GRS Star
   Asynchronous (Async)
        – z/OS engine does not wait for completion
        – Response times usually longer than for Sync requests
         – e.g. XCF signalling
   Automatic Sync to Async Conversion
        – Algorithm introduced by z/OS Release 2
        – Requests converted wholesale
        – With conversion an occasional request is tried as Sync
           • Governs whether conversion is the right thing to do
           • Factors
                   –   Larger data transfer
                   –   Longer / slower links
                   –   Processor speed
                   –   Duplexing
        – Thresholds recently refined
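
A rough way to see why the Sync / Async decision matters: every synchronous request consumes host engine time for its whole service time, so the cost scales with request rate times service time. Below is a minimal Python sketch of that arithmetic; the request rates and service times are illustrative assumptions, not measured values.

    # Rough cost model for synchronous CF requests (illustrative numbers only).
    # Each Sync request burns host CPU for its entire service time.

    def sync_cpu_cost(requests_per_sec: float, service_time_us: float) -> float:
        """Fraction of one z/OS engine consumed by synchronous CF requests."""
        return requests_per_sec * service_time_us / 1_000_000.0

    # Hypothetical workload: 50,000 Sync requests/sec at 20 microseconds each costs
    # one full engine; at 40 microseconds (e.g. a longer link) it costs two.
    for svc_us in (10.0, 20.0, 40.0):
        print(f"{svc_us:5.1f} us service time -> "
              f"{sync_cpu_cost(50_000, svc_us):.2f} engines of capacity lost")

This is the kind of trade-off the automatic conversion heuristic weighs when deciding whether a request is better issued asynchronously.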







Structure Duplexing
         2 copies of the same structure in different CFs
              – Maintained in sync
              – Higher resilience to component failures
                 • Loss of z/OS images and ICF on same footprint less likely to cause an outage
                 • Faster than structure rebuild
         Bidirectional links required between the two CFs
              – Preferably more than one
         User-Managed
              – “User” is DB2
              – DB2 writes data to both structures
                 • Async write to primary
                 • Then Sync write to secondary
                 • Completion when both writes succeed
                 • Reads only from the Primary
                 • In event of failure reads from Secondary
         System-Managed
              – XES writes to primary and secondary
                 • Both CFs communicate through a separate link to ensure status is shared
                 • Request only completes when both structures have been accessed





CFLEVELS
   Coupling Facility Code Releases
        – Loosely related to processor family
        – New functions and algorithm enhancements introduced this way
           • Eg CFLEVEL=11 is required for structure duplexing
           • Exploitation may require corequisite functions in eg z/OS or DB2
                   – Eg z/OS Release 2 + PTFs for System-Managed Duplexing
   Footprints can have different CFLEVELS for each LPAR
        – An LPAR can only run one level
         – This facility exists to ease migration
   Structures occasionally need to increase in size when upgrading
   Useful information in
    http://www-03.ibm.com/servers/eserver/zseries/pso/cftable.html
        – Brief summary of CFLEVEL features
        – Matrix of processor support for each CFLEVEL

   Latest CFLEVEL is 15
        – More concurrent CFCC tasks
        – Base for reduced synchronisation traffic for Structure Duplexing
        – Structure-level CPU recording






Intelligent Resource Director
   Manages resources within a “LPAR Cluster”
        – The members of a parallel sysplex on ONE machine
              • Physically impossible to move resources BETWEEN machines
   Varies on and offline Logical CPs
   Manages LPAR weights between members
        – The total for the cluster remains constant
   Manages CHPIDs between members
   Memory NOT managed
   Decisions based on WLM Goal attainment and PR/SM’s view of resource utilisation
   Helps “takeover” scenario
        – Eg LPAR weights move to the surviving member(s) on the machine








Workload Distribution Mechanism Examples

         WLM
              – Batch Initiators – JES2 and JES3
              – VTAM Generic Resources
              – Sysplex Distributor for TCP/IP
         DB2 Data Sharing Group Attach

         CPSM
              – Dynamic CICS workload management
              – Plus many other functions for managing CICS regions








Major DB2 Data Sharing Components
               [Diagram: two z/OS images sharing data on disk; each image runs a DB2 member
                with its own IRLM and XCF; both connect to a Coupling Facility holding the
                XCF structures, LOCK1, the GBPs and the SCA, with a Sysplex Timer keeping
                the images synchronised]




Major DB2 Data Sharing Components

         Locking
               – Each DB2 member of the group has its own IRLM address space
              – IRLMs communicate through their LOCK1 structure
                  • groupname_LOCK1
              – IRLMs also communicate via XCF
                  • DXRnnn groups
                  • XES locking services also use XCF
                        – IXCLOnnn groups
         Group Buffer pools
              – 1 CF structure per GBP
                 • groupname_GBPn
                 • Members connect directly to GBP structures
         Shared Communications Area (SCA)
              – Status sharing
                 • groupname_SCA
                 • Much lower activity than LOCK1 and GBPs
                        – Not usually considered for tuning
                   • Members connect directly to SCA structure








Locking - LOCK1 Structure

         Locks must be known and respected between members
              – Data Sharing uses global locks to achieve this
         But not all locks need to be propagated to achieve this
               – Only the most restrictive state needs to be propagated
         Locking is propagated from IRLM, via XES to the LOCK1 CF structure
              – IRLM knows about locking states that XES doesn’t
                   • XES only knows about “shared” and “exclusive” locks
                   • DB2 had many more states, even before Data Sharing
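
The "most restrictive state" idea can be sketched as follows. This is a simplified illustration only: the lock modes, their ordering and the mapping to the two XES states are assumptions made for the example, not DB2's actual IRLM-to-XES tables.

    # Simplified sketch: a member propagates only its most restrictive interest in a
    # resource, mapped down to the two states XES understands.
    # The mode ordering and mapping below are illustrative, not DB2's real rules.

    MODES = ["IS", "IX", "S", "U", "SIX", "X"]      # assumed order, least to most restrictive
    XES_STATE = {"IS": "SHARED", "S": "SHARED",
                 "IX": "EXCLUSIVE", "U": "EXCLUSIVE", "SIX": "EXCLUSIVE", "X": "EXCLUSIVE"}

    def propagate(local_locks):
        """Return the single XES state to register for this member's local locks."""
        most_restrictive = max(local_locks, key=MODES.index)
        return XES_STATE[most_restrictive]

    print(propagate(["IS", "S"]))    # SHARED - only shared interest leaves the member
    print(propagate(["S", "X"]))     # EXCLUSIVE - the X lock is what gets propagated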








Locking - LOCK1 Structure

              – LOCK1 Structure has two parts
                    • Lock table aka “Hash table”
                         – Each entry has:
                            > Resource name
                            > First system to have exclusive interest
                            > Flags for each system that has shared interest
                         – Entry size controls maximum number of connecting members
                            > 2 bytes up to 6 members
                            > 4 bytes up to 22 members
                         – Generally fewer entries than there are resources to lock
                            > Real resources hash to resource names
                    • Modified resource list
                         – Used for recovery purposes
                         – Less interesting – from a tuning perspective
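
As a rough sketch of the lock table geometry described above: the lock-table portion is divided into fixed-width entries (width driven by the number of members), the entry count is a power of two, and each lockable resource hashes to one of those entries. The sizes and the hash function below are illustrative assumptions.

    import zlib

    # Illustrative lock table geometry. Entry width follows the member-count rule above;
    # the entry count is the largest power of two that fits the lock-table space.

    def entry_width(members):
        if members <= 6:
            return 2                  # 2-byte entries, up to 6 members
        if members <= 22:
            return 4                  # 4-byte entries, up to 22 members
        return 8

    def num_entries(lock_table_bytes, members):
        width, n = entry_width(members), 1
        while n * 2 * width <= lock_table_bytes:
            n *= 2
        return n

    def lock_table_slot(resource_name, entries):
        """Hash a resource name to a lock table entry (illustrative hash)."""
        return zlib.crc32(resource_name.encode()) % entries

    entries = num_entries(32 * 1024 * 1024, members=4)    # assumed 32 MB lock table portion
    print(entries, lock_table_slot("DBX.TS1.PAGE0004", entries))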








Locking - LOCK1 Structure

         Contention types:
              – Real Contention
                    • Different members really do need to use the same resource at the
                      same time
                    • Real application delay inherent while the holder retains the lock
              – XES Contention
                    • When XES believes there is contention but IRLM knows there isn’t –
                      because of its more comprehensive view of locking
                         – IRLMs have to talk via XCF to resolve this - DXRnnn
              – False Contention
                    • When the hashing algorithm for the lock table provides the same
                      hash value for two different resources
                         – XESs have to talk via XCF to resolve this - IXCLOnnn
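
False Contention, then, is simply a hash collision: two unrelated resources land on the same lock table entry. The toy simulation below shows how the collision rate falls as the number of lock table entries grows; the lock counts and trial sizes are made up for illustration.

    import random

    # Toy simulation: fraction of concurrently-held, unrelated locks that share a lock
    # table entry with another lock, for different lock table sizes.

    def false_contention_rate(active_locks, entries, trials=200):
        collisions = 0
        for _ in range(trials):
            slots = [random.randrange(entries) for _ in range(active_locks)]
            collisions += active_locks - len(set(slots))   # locks sharing a slot
        return collisions / (active_locks * trials)

    random.seed(1)
    for entries in (2**20, 2**22, 2**24):                  # more entries -> fewer collisions
        rate = false_contention_rate(5000, entries)
        print(f"{entries:>10,} entries: ~{rate:.4%} of lock requests falsely contended")

This is why increasing the lock table size (discussed under Locking Tuning later) is the usual remedy for high False Contention.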








Group Buffer Pools & Structures – How They Work - 1
         Inter-DB2 Read / Write interest:
              – When one member has a write interest and at least one has a read
                interest
              – Tracked via Page Set Physical locks
                  • Always propagated to the LOCK1 structure, even when only one
                    member
                         – Some cost for a single member data sharing group
              – If there is Inter-DB2 Read / Write interest in an object:
                  • The buffer pools in the members cooperate via the corresponding
                    Group Buffer Pool
                         – “GBP Dependency”
              – Objects can go into and out of GBP Dependency
                    • “Read Only Switching” (RO Switching)
                    • Detected by Data Sharing Group
                         – PCLOSET and PCLOSEN parameters affect how often to check
                    • Affects GBP dependency
                         – Trade off between GBP Dependency time and RO Switching rate




Group Buffer Pools & Structures – How They Work - 2
      Updates cause Cross Invalidation
          – DB2 members check their copy of a page is valid before using it
             • Via a bitmap that each system’s XES maintains in memory
          – To use an invalidated page the member retrieves it from the GBP
          – Updates to the GBP generally happen at an application’s Commit point
             • Synchronously forcing changed pages to the GBP
                     – “Force At Commit”
                • Sometimes we get writes to the GBP when locking at row level
                     – To allow updated page to be retrieved by another member
      Local pool misses search the GBP first
        – So the GBP can act as an extra layer of buffering
        – Retrieval would generally be quicker than from disk / cache
      At intervals the Castout process purges some pages from the GBP
        – Written back out through the local DB2 subsystems
        – Threshold driven
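
Because local pool misses search the GBP before going to disk, a useful derived number is the GBP read hit ratio. A minimal sketch of that calculation from the four kinds of read request; the dictionary keys are descriptive labels rather than the actual statistics field names, and the numbers are made up.

    # GBP read hit ratio from the four kinds of read request (illustrative counts).
    counts = {
        "xi_reads_data_returned":   12000,   # cross-invalidation read satisfied from the GBP
        "xi_reads_no_data":          3000,   # page known down-level but not in the GBP -> disk read
        "miss_reads_data_returned":  8000,   # local pool miss satisfied from the GBP
        "miss_reads_no_data":       27000,   # in neither the local pool nor the GBP -> disk read
    }

    hits = counts["xi_reads_data_returned"] + counts["miss_reads_data_returned"]
    total = sum(counts.values())
    print(f"GBP read hit ratio: {hits / total:.1%}")       # 40.0% with these made-up numbers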





Group Buffer Pools & Structures – How They Work - 3
         Installation can control regimes
              – At Group Buffer Pool Level
              – By object
         GBPCACHE(CHANGED)
                     • Only changed (written) pages are cached
         GBPCACHE(ALL)
              – Cached as they are read
                    • Even if no Inter-DB2 R/W Interest
              – Writes also cached
         GBPCACHE(NONE)
               – No caching done
                    • Just used as a Cross-Invalidation mechanism
         GBPCACHE(SYSTEM)
               – For a special kind of table space used for LOBs
              – Only certain types of pages are cached


Performance








Tuning Objectives

         Response Times
              – Additional time could be caused by CF activities
                    • Also additional locking problems
         Throughput
              – For a given capacity throughput might be less
         Minimised Cost
              – Minimise the configuration needed for data sharing
                    • So long as that doesn’t conflict with other objectives
         Robustness
              – Ensure performance doesn’t degrade with load





Parallel Sysplex Instrumentation
   XCF:
        – SMF 74 Subtype 2
              • RMF XCF Activity Report
                   – Applications
                   – Groups
                   – Paths
                      > CTCs are treated like real devices so SMF 74-1, 73 and 78-3 can be useful
                   – Members
                      > Job name in z/OS R.9
        – DISPLAY XCF operator command
   Coupling Facility
        – SMF 74 Subtype 4
              • RMF Coupling Facility Activity Report
                   –   Usage Summary Section – Structure sizes and CPU usage
                   –   Structure Activity Section
                   –   Subchannel Activity Section – Path / Subchannel information
                   –   CF to CF Section – Duplexing traffic at the CF level



Data Sharing Instrumentation
   Accounting Trace
        – Generally provides a time breakdown for each application
              • Plan, Correlation ID and Package level
              • Excellent tuning instrumentation for applications
        – Includes:
              • Global Lock Wait
              • Time to retrieve pages from GBP
                   – Subsumed within Sync DB Wait and Async Read
              • Time for commits
                   – Can involve GBP traffic
        – Activities
              • Group Buffer Pool
              • Global Locking

   Statistics Trace
        – Activities
              • Group Buffer Pool
                   – Note: MXG change 25.075 required to support incompatibly-changed DB2 Version 8 GBP
                     statistics
              • Global Locking




Capacity and “White Space”

         “White Space” is capacity which needs to be kept free to accept
          structures failing over from other Coupling Facilities
              – Memory, CPU and links
              – Duplexing reduces the need for this


         Coupling Facility Control Code works best with CPU utilisation kept below
          about 50%
              – Above that response times begin to degrade
                    • With impact on coupling cost to z/OS images and on response
                      times
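
A quick way to check a CF against this guideline, and against the white space needed to absorb a failing peer, from the RMF Usage Summary numbers. All the figures in this sketch are made up.

    # Check a CF against the ~50% busy guideline and against the white space needed to
    # take over the structures currently allocated in a peer CF. Numbers are illustrative.

    def cf_health(cpu_busy_pct, storage_mb, allocated_mb, peer_allocated_mb):
        free_mb = storage_mb - allocated_mb
        print(f"CPU busy {cpu_busy_pct:.0f}% -> "
              f"{'OK' if cpu_busy_pct < 50 else 'expect degraded response times'}")
        print(f"White space {free_mb:.0f} MB -> "
              f"{'can absorb peer CF' if free_mb >= peer_allocated_mb else 'cannot absorb peer CF'}")

    cf_health(cpu_busy_pct=35, storage_mb=8192, allocated_mb=3000, peer_allocated_mb=3500)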








Link Performance

         Configure the fastest suitable links
              – Type: IC vs ICB vs ISC vs CIB vs CIB LR
              – Generation: eg ISC-3 vs ISC-2
         Configure enough of them
         Use Peer Mode
         Monitor for
              – Signalling response times
              – Path Busy Conditions
              – Subchannel Busy Conditions
              – Request Failures





XCF Tuning
          Aim to reduce transfer times
               – “Mean Transfer Time” MXFER TIME in RMF
          Aim to minimise traffic
               – Rates at all levels in RMF
               – Eg Minimise Locking False Contention
               – Eg Set up GRS Star in a way that minimises ENQs
          Optimise the use of links
               – More modern CF-based links tend to outperform CTCs
                  • But CTCs still better for small messages
                  • CTCs drive SAP utilisation
               – RMF counts the number of times each path was chosen
                     • Understand why signals use the paths in the ratio they do
          Transport Class buffer sizes
               – Buffers that are too big waste memory
               – Buffers that are too small have to be expanded
                  • Sometimes with cost
               – RMF has counts of “Fit”, “Small”, “Big” and “Big With Overhead” messages
               – RMF lists transport classes and their maximum buffer size values
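
The “Fit” / “Small” / “Big” counts come from comparing each message length with the transport class's buffer length. Below is a sketch of that classification; the class size, the “small” threshold and the message lengths are all assumptions for illustration.

    from collections import Counter

    # Classify XCF messages against a transport class buffer length, in the spirit of
    # RMF's Fit / Small / Big counts. The half-size "small" threshold is illustrative.

    def classify(msg_len, class_len):
        if msg_len > class_len:
            return "BIG"        # buffer must be expanded -> extra processing
        if msg_len < class_len // 2:
            return "SMALL"      # buffer much larger than needed -> wasted space
        return "FIT"

    class_len = 8192                                       # hypothetical class buffer size
    messages = [956, 4096, 8000, 20000, 300, 6000, 61000]  # hypothetical message lengths
    print(Counter(classify(m, class_len) for m in messages))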






Group Buffer Pool Tuning
         GBP tuning has similarities to local pool tuning
              – But some twists
         Important to minimise traffic
              – Application GETPAGEs in general
              – Traffic to GBPs
         Also important to minimise response times
              – Which is mainly a matter of tuning the underlying CF access
         Minimising the amount of data actually shared may be practical
              – For many designs it isn’t
         Important to avoid invalidations due to too few directory entries
              – GBP space divided into Directory entries and Data elements
                   • Directory entry reclaims if too few entries
                        – Causes invalidations of local buffers
                   • Installation can alter the balance
                   • Installation can increase the size of the group buffer pool
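
The directory entry / data element split can be reasoned about with a simple model: the GBP's space is divided between directory entries (one needed for every distinct page of interest, whether or not its data is cached) and data elements that actually hold pages. A minimal sketch follows; the entry and element sizes and the structure size are assumptions, with the ratio playing the same role as DB2's RATIO parameter.

    # Rough model of splitting a GBP structure between directory entries and data elements.
    DIR_ENTRY_BYTES = 208          # assumed size of one directory entry
    DATA_ELEMENT_BYTES = 4096      # one 4 KB page per data element

    def split(structure_mb, ratio):
        """ratio = directory entries per data element (the RATIO-style trade-off)."""
        total = structure_mb * 1024 * 1024
        per_element = DATA_ELEMENT_BYTES + ratio * DIR_ENTRY_BYTES
        elements = int(total // per_element)
        return int(elements * ratio), elements             # (directory entries, data elements)

    for ratio in (5, 10, 20):
        entries, elements = split(1024, ratio)             # assumed 1 GB GBP, varying ratio
        print(f"ratio {ratio:>2}: ~{entries:,} directory entries, ~{elements:,} data elements")

More directory entries per data element means fewer pages cached but less risk of directory entry reclaims, which is exactly the invalidation trade-off described above.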



Group Buffer Pool Tuning - Traffic
   Reads:
        – Cross invalidation reads
              • Data returned i.e. is in GBP
              • Data not returned i.e. is known to be down level but page not in the GBP
                   – Requires disk read
        – Buffer pool miss reads
               • Data returned i.e. not in the local pool but is in the GBP
               • Data not returned i.e. is in neither the local pool nor the GBP
        – A bigger GBP ought to provide more hits and fewer misses
              • But I rarely see high GBP hit ratios
   Writes:
        – Avoid GBPCACHE(NONE) as writes are SYNCHRONOUS TO DISK at Commit time
              • Harmful to the Committing unit of work’s performance
        – Writes can also be caused by the LOCAL pool’s Deferred Write thresholds being hit
              • In this case Commits aren’t waited for


   Castouts:
         – Dribbling them out is a good idea
              • Just like for local pools







Locking Tuning

         It’s important to reduce locking traffic at all levels
              – Application
              – DB2 subsystem
         It’s also important to reduce False Contention
              – Usually by increasing the Lock Table portion of the LOCK1 structure
                    • Number of entries will be a power of 2
              – 4-byte lock table entry means fewer entries for same size than 2-byte
         It’s nice that in DB2 Version 8 there’s a remapping of IRLM lock states to
          XES ones
              – May reduce XES lock contention
         CF Request response times also important








Special Coupling Facility Commands

      A number of special commands have been introduced to make CF
       requests more efficient
          – Generally have CFLEVEL prerequisites
          – READ_COCLASS
                • DB2 Version 6
          – WARM
                • DB2 Version 8
          – RFCOM
                • DB2 Version 8
      DB2 Version 8 Statistics Trace instruments WARM and RFCOM







DB2 Version 9

         Restart performance improved
         Command to remove GBP Dependency at object level
              – ACCESS DB MODE(NGBPDEP)
              – Typical usage would be before batch run
                    • Issue on member where you plan to run the job
         Improved performance for GBP writes
         DB2 overall health taken into account for WLM routing
         Balance Group Attach connections across members on same LPAR
              – Usermod to Versions 7 and 8
         etc…






Summary
  Parallel Sysplex has many benefits

       – More fully realised with Data Sharing
  Need to manage performance and cost carefully

  Configuration choices make an enormous difference

  Avoid shared coupling facilities for Production

  Good monitoring tools for both z/OS / Hardware, and DB2

  Tune not only DB2 structures and XCF performance

       – But also other structures and users of XCF





Visualising and forecasting stocks using Dashnarutouzumaki53779
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesKari Kakkonen
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...AliaaTarek5
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Exploring ChatGPT Prompt Hacks To Maximally Optimise Your Queries
Exploring ChatGPT Prompt Hacks To Maximally Optimise Your QueriesExploring ChatGPT Prompt Hacks To Maximally Optimise Your Queries
Exploring ChatGPT Prompt Hacks To Maximally Optimise Your QueriesSanjay Willie
 
Fact vs. Fiction: Autodetecting Hallucinations in LLMs
Fact vs. Fiction: Autodetecting Hallucinations in LLMsFact vs. Fiction: Autodetecting Hallucinations in LLMs
Fact vs. Fiction: Autodetecting Hallucinations in LLMsZilliz
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterMydbops
 

Recently uploaded (20)

DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
 
Visualising and forecasting stocks using Dash
Visualising and forecasting stocks using DashVisualising and forecasting stocks using Dash
Visualising and forecasting stocks using Dash
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examples
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Exploring ChatGPT Prompt Hacks To Maximally Optimise Your Queries
Exploring ChatGPT Prompt Hacks To Maximally Optimise Your QueriesExploring ChatGPT Prompt Hacks To Maximally Optimise Your Queries
Exploring ChatGPT Prompt Hacks To Maximally Optimise Your Queries
 
Fact vs. Fiction: Autodetecting Hallucinations in LLMs
Fact vs. Fiction: Autodetecting Hallucinations in LLMsFact vs. Fiction: Autodetecting Hallucinations in LLMs
Fact vs. Fiction: Autodetecting Hallucinations in LLMs
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL Router
 

DB2 Data Sharing Performance Beginners Guide

  • 8. Major Parallel Sysplex Components
      Coupling Facilities (CFs)
       – With structures
       – Usually more than one CF
       – Run CFCC code rather than z/OS
      Members
       – Such as LPARs
       – Up to 32 members
      Links between Members and CFs
       – Configured for bandwidth and availability
      XCF
       – Provides a communications mechanism
      Applications
       – Exploiting CF structures directly
       – Using XCF services
      Timer
       – Ensures all members are synchronised
       – Traditionally Sysplex Timer
       – Strategically Server Time Protocol network
      XES manages communications
  • 9. XCF
      General signalling mechanism
       – Introduced before the other Parallel Sysplex functions
      Traffic divided into transport classes
       – These use either Coupling Facility structures or CTCs to pass messages
          • Dynamically routed based on XCF observing performance
          • Dedicated CTCs
             – Originally were faster than CF structures
             – Must define a pair of paths for each connection
             – Definition can get quite complex
       – Transport classes have a maximum message size
          • Fitting messages to classes is a significant tuning item
       – One message per buffer
          > Buffer space wasted if the message is smaller than the buffer
          > Additional processing if it's bigger
      Applications use specific XCF group names
       – Example: IXCLOxxx is XES lock resolution
       – Application address spaces connect as members of the group
  • 10. Coupling Facilities
      Usually on the same footprint as z/OS, Linux or z/VM images
       – Called an Integrated Coupling Facility (ICF)
       – ICFs generally run on (cheaper) PUs characterised as ICF PUs
       – Can be stand-alone
       – If the CF is on the same footprint as z/OS images that use it, a footprint failure brings down both the z/OS images and the ICF
      Speed of CF PUs relative to the sharing z/OS images' PUs is important for performance
       – ICF PUs are automatically matched to z/OS PUs on the same footprint
      CF PUs can be shared
       – Generally reduces coupling performance
          • Especially if Dynamic Dispatch is turned on
       – Only recommended for Development and Test Parallel Sysplexes
      CFCC code releases are called Coupling Facility Levels (CFLEVELS)
  • 11. Coupling Facility Structures
      Cache Structures
       – Data is cached
          • Requests await the return of data
       – Example: Enhanced Catalog Sharing
          • SYSIGGCAS_ECS
      Lock Structures
       – Control the granting of locks
          • Requests don't await the return of the lock
       – Example: GRS Star
          • ISGLOCK
      List Structures
       – Groups of "lists"
          • Which are more like arrays
       – Example: XCF
          • IXCPATH_CFP2
      Serialized List Structures
       – Example: WebSphere MQ queues
          • MQPGCSQ_ADMIN
      Different performance and availability characteristics for each exploiter
  • 12. CFSIZER
      Website that enables you to size CF structures
       – http://www.ibm.com/systems/z/cfsizer/
       – Most IBM product structures catered for
          • Given a product name it tells you which structures you want
       – And suggests a size
       – Produces an "initial" configuration
          • For example DB2 structures are likely to need tuning
             – In fact CFSIZER probably suggests smaller GBPs than you will need
       – Has good Help
       – Recently updated
  • 13. Coupling Facility Links
      Different types:
       – Internal Coupling (IC)
          • Extremely fast, within one footprint
       – Integrated Cluster Bus (ICB-4 and ICB-3)
          • Fast, very short-distance links between footprints
       – Inter-Systems Coupling (ISC-3)
          • Slower, longer distance
             – Speed decreases with distance
          • Needed for eg cross-town sysplexes
       – InfiniBand Statement of Direction for System z10 EC
      Redundancy and bandwidth are both important
      Each generation of each of these provides greater capability
       – Typically speed
       – Generations tend to go with processor families
      Typical configuration:
       – 2 footprints
          • Each has 1 or 2 member z/OS images
          • Each has 1 Internal Coupling Facility (ICF) image
          • IC links between members and the ICF image on the same footprint
       – ISC links between a member and its remote ICF
          > ICB links would imply footprints physically very close
  • 14. Synchronous and Asynchronous CF Requests
      Synchronous (Sync)
       – z/OS engine waits for completion
          • Each microsecond of request service time is a microsecond of lost engine capacity
       – e.g. GRS Star
      Asynchronous (Async)
       – z/OS engine does not wait for completion
       – Response times usually longer than for Sync requests
       – e.g. XCF signalling
      Automatic Sync to Async Conversion
       – Algorithm introduced by z/OS Release 2
       – Requests converted wholesale
       – With conversion, an occasional request is still tried as Sync
          • Governs whether conversion is the right thing to do
          • Factors:
             – Larger data transfer
             – Longer / slower links
             – Processor speed
             – Duplexing
       – Thresholds recently refined
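The "lost engine capacity" point above can be turned into a quick estimate: synchronous request rate multiplied by synchronous service time gives engine-seconds consumed per second. A minimal Python sketch, with made-up numbers, purely for illustration:

```python
# Rough cost of synchronous CF requests to the sending z/OS image.
# A sync request occupies the issuing engine for its whole service time,
# so engine-seconds lost per second = rate * service time.

def sync_cf_cost(sync_rate_per_sec: float, avg_sync_service_usec: float) -> float:
    """Return engines' worth of capacity consumed by sync CF requests."""
    return sync_rate_per_sec * avg_sync_service_usec / 1_000_000

# Illustrative numbers only: 50,000 sync requests/sec at 15 microseconds each
# costs about 0.75 of an engine on the requesting side.
if __name__ == "__main__":
    print(f"{sync_cf_cost(50_000, 15):.2f} engines")
```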
  • 15. Structure Duplexing
      2 copies of the same structure in different CFs
       – Maintained in sync
       – Higher resilience to component failures
          • Loss of z/OS images and ICF on the same footprint less likely to cause an outage
          • Faster than structure rebuild
      Bidirectional links required between the two CFs
       – Preferably more than one
      User-Managed
       – "User" is DB2
       – DB2 writes data to both structures
          • Async write to primary
          • Then Sync write to secondary
          • Completion when both writes succeed
          • Reads only from the Primary
          • In event of failure, reads from the Secondary
      System-Managed
       – XES writes to primary and secondary
          • Both CFs communicate through a separate link to ensure status is shared
          • Request only completes when both structures have been accessed
  • 16. CFLEVELS
      Coupling Facility Code Releases
       – Loosely related to processor family
       – New functions and algorithm enhancements introduced this way
          • Eg CFLEVEL=11 is required for structure duplexing
          • Exploitation may require corequisite functions in eg z/OS or DB2
             – Eg z/OS Release 2 + PTFs for System-Managed Duplexing
      Footprints can have different CFLEVELs for each LPAR
       – An LPAR can only run one level
       – This facility eases migration
      Structures occasionally need to increase in size when upgrading
      Useful information in http://www-03.ibm.com/servers/eserver/zseries/pso/cftable.html
       – Brief summary of CFLEVEL features
       – Matrix of processor support for each CFLEVEL
      Latest CFLEVEL is 15
       – More concurrent CFCC tasks
       – Base for reduced synchronisation traffic for Structure Duplexing
       – Structure-level CPU recording
  • 17. Intelligent Resource Director
      Manages resources within an "LPAR Cluster"
       – The members of a Parallel Sysplex on ONE machine
          • Physically impossible to move resources BETWEEN machines
      Varies Logical CPs on and offline
      Manages LPAR weights between members
       – The total for the cluster remains constant
      Manages CHPIDs between members
      Memory is NOT managed
      Decisions based on WLM Goal attainment and PR/SM's view of resource utilisation
      Helps the "takeover" scenario
       – Eg LPAR weights move to the surviving member(s) on the machine
  • 18. Workload Distribution Mechanism Examples
      WLM
       – Batch Initiators
       – JES2 and JES3
       – VTAM Generic Resources
       – Sysplex Distributor for TCP/IP
      DB2 Data Sharing Group Attach
      CPSM
       – Dynamic CICS workload management
       – Plus many other functions for managing CICS regions
  • 19. Major DB2 Data Sharing Components
     [Diagram: two z/OS images, each running a DB2 member with its IRLM, communicating via XCF and sharing data through Coupling Facility structures (LOCK1, the Group Buffer Pools and the SCA), with a Sysplex Timer keeping the images synchronised]
  • 20. Major DB2 Data Sharing Components
      Locking
       – DB2 subsystems in a group in one z/OS image share an IRLM address space
       – IRLMs communicate through their LOCK1 structure
          • groupname_LOCK1
       – IRLMs also communicate via XCF
          • DXRnnn groups
          • XES locking services also use XCF – IXCLOnnn groups
      Group Buffer Pools
       – 1 CF structure per GBP
          • groupname_GBPn
          • Members connect directly to GBP structures
      Shared Communications Area (SCA)
       – Status sharing
          • groupname_SCA
          • Much lower activity than LOCK1 and GBPs
       – Not usually considered for tuning
       – Members connect directly to the SCA structure
  • 21. Locking - LOCK1 Structure
      Locks must be known and respected between members
       – Data Sharing uses global locks to achieve this
      But not all locks need to be propagated to achieve this
       – Only the most restrictive state needs to be propagated
      Locking is propagated from IRLM, via XES, to the LOCK1 CF structure
       – IRLM knows about locking states that XES doesn't
          • XES only knows about "shared" and "exclusive" locks
          • DB2 had many more states, even before Data Sharing
  • 22. Locking - LOCK1 Structure
       – The LOCK1 Structure has two parts
          • Lock table, aka "Hash table"
             – Each entry has:
                > Resource name
                > First system to have exclusive interest
                > Flags for each system that has shared interest
             – Entry size controls the maximum number of connecting members
                > 2 bytes: up to 6 members
                > 4 bytes: up to 22 members
             – Generally fewer entries than there are resources to lock
                > Real resources hash to resource names
          • Modified resource list
             – Used for recovery purposes
             – Less interesting, from a tuning perspective
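A back-of-envelope way to think about lock table sizing, using only the rules stated on this slide (entry width driven by member count, entry count a power of 2). This is an illustration, not the exact IRLM/XES allocation algorithm:

```python
# Back-of-envelope lock table sizing, following the slide's rules of thumb:
#   - entry width is 2 bytes for up to 6 members, 4 bytes for up to 22
#   - the number of lock table entries is a power of 2
# NOT the exact IRLM/XES allocation algorithm, just an illustration.

def lock_table_entries(lock_table_bytes: int, members: int) -> int:
    entry_width = 2 if members <= 6 else 4
    entries = lock_table_bytes // entry_width
    power_of_2 = 1
    while power_of_2 * 2 <= entries:
        power_of_2 *= 2
    return power_of_2

# Example: a 32 MB lock table portion with 8 members uses 4-byte entries,
# giving about 8 million entries (2 ** 23).
if __name__ == "__main__":
    print(lock_table_entries(32 * 1024 * 1024, 8))
```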
  • 23. Locking - LOCK1 Structure
      Contention types:
       – Real Contention
          • Different members really do need to use the same resource at the same time
          • Real application delay is inherent while the holder retains the lock
       – XES Contention
          • When XES believes there is contention but IRLM knows there isn't
             – Because of its more comprehensive view of locking
          • IRLMs have to talk via XCF to resolve this - DXRnnn
       – False Contention
          • When the hashing algorithm for the lock table provides the same hash value for two different resources
          • XESs have to talk via XCF to resolve this - IXCLOnnn
  • 24. Group Buffer Pools & Structures – How They Work - 1
      Inter-DB2 Read / Write interest:
       – When one member has a write interest and at least one other has a read interest
       – Tracked via Page Set Physical locks
          • Always propagated to the LOCK1 structure, even when there is only one member
             – Some cost for a single-member data sharing group
       – If there is Inter-DB2 Read / Write interest in an object:
          • The buffer pools in the members cooperate via the corresponding Group Buffer Pool
             – "GBP Dependency"
       – Objects can go into and out of GBP Dependency
          • "Read Only Switching" (RO Switching)
          • Detected by the Data Sharing Group
             – PCLOSET and PCLOSEN parameters affect how often to check
          • Affects GBP dependency
             – Trade-off between GBP Dependency time and RO Switching rate
  • 25. Group Buffer Pools & Structures – How They Work - 2
      Updates cause Cross Invalidation
       – DB2 members check their copy of a page is valid before using it
          • Via a bitmap that each system's XES maintains in memory
       – To use an invalidated page the member retrieves it from the GBP
       – Updates to the GBP generally happen at an application's Commit point
          • Synchronously forcing changed pages to the GBP – "Force At Commit"
          • Sometimes we get writes to the GBP when locking at row level
             – To allow the updated page to be retrieved by another member
      Local pool misses search the GBP first
       – So the GBP can act as an extra layer of buffering
       – Retrieval would generally be quicker than from disk / cache
      At intervals the Castout process purges some pages from the GBP
       – Written back out through the local DB2 subsystems
       – Threshold driven
  • 26. Group Buffer Pools & Structures – How They Work - 3
      Installation can control the caching regime
       – At Group Buffer Pool level
       – By object
      GBPCACHE(CHANGED)
       – Only the writes are cached
      GBPCACHE(ALL)
       – Pages cached as they are read
          • Even if no Inter-DB2 R/W Interest
       – Writes also cached
      GBPCACHE(NONE)
       – No caching done
          • Just used as a Cross-Invalidation mechanism
      GBPCACHE(SYSTEM)
       – For a special kind of tablespace called a LOB
       – Only certain types of pages are cached
  • 27. Performance
  • 28. Tuning Objectives
      Response Times
       – Additional time could be caused by CF activities
          • Also additional locking problems
      Throughput
       – For a given capacity, throughput might be less
      Minimised Cost
       – Minimise the configuration needed for data sharing
          • So long as that doesn't conflict with other objectives
      Robustness
       – Ensure performance doesn't degrade with load
  • 29. Parallel Sysplex Instrumentation
      XCF:
       – SMF 74 Subtype 2
          • RMF XCF Activity Report
             – Applications
             – Groups
             – Paths
                > CTCs are treated like real devices so SMF 74-1, 73 and 78-3 can be useful
             – Members
                > Job name in z/OS R.9
       – DISPLAY XCF operator command
      Coupling Facility
       – SMF 74 Subtype 4
          • RMF Coupling Facility Activity Report
             – Usage Summary Section – structure sizes and CPU usage
             – Structure Activity Section
             – Subchannel Activity Section – path / subchannel information
             – CF to CF Section – duplexing traffic at the CF level
  • 30. Data Sharing Instrumentation
      Accounting Trace
       – Generally provides a time breakdown for each application
          • Plan, Correlation ID and Package level
          • Excellent tuning instrumentation for applications
       – Includes:
          • Global Lock Wait
          • Time to retrieve pages from the GBP
             – Subsumed within Sync DB Wait and Async Read
          • Time for Commits
             – Can involve GBP traffic
       – Activities
          • Group Buffer Pool
          • Global Locking
      Statistics Trace
       – Activities
          • Group Buffer Pool
             – Note: MXG change 25.075 required to support incompatibly-changed DB2 Version 8 GBP statistics
          • Global Locking
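As a sketch of how such a time breakdown might be summarised, the snippet below reports each wait bucket's share of total suspension time. The bucket names are descriptive placeholders, not real DB2 accounting field names, and the figures are invented:

```python
# Toy report of a plan's suspension-time breakdown, in the spirit of the
# Accounting Trace discussion above. Bucket names are placeholders only.

def suspension_report(waits_sec: dict) -> None:
    total = sum(waits_sec.values())
    for bucket, secs in sorted(waits_sec.items(), key=lambda kv: -kv[1]):
        print(f"{bucket:18s} {secs:6.2f}s  {secs / total:6.1%}")

suspension_report({
    "sync_db_wait": 2.5,       # includes GBP page retrieval, per the slide
    "global_lock_wait": 0.8,
    "commit_wait": 0.4,
    "other": 1.3,
})
```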
  • 31. Capacity and "White Space"
      "White Space" is capacity which needs to be kept free for oncoming work from other Coupling Facilities
       – Memory, CPU and links
       – Duplexing reduces the need for this
      Coupling Facility Control Code requires CPU utilisation below about 50%
       – Above that, response times begin to degrade
          • With impact on coupling cost to z/OS images and on response times
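One way to apply the roughly 50% guideline is a simple failover check: could one CF absorb its peer's load and stay under the threshold? A minimal sketch, assuming the busy percentages have already been read from the RMF Coupling Facility Activity report and the numbers are illustrative:

```python
# Quick "white space" check using the ~50% CF CPU guideline from the slide.

def cf_can_absorb_peer(own_busy_pct: float, peer_busy_pct: float,
                       own_speed: float = 1.0, peer_speed: float = 1.0) -> bool:
    """Could this CF take over its peer's load and stay under ~50% busy?
    Speeds are relative CF engine capacities (1.0 = identical)."""
    combined = own_busy_pct + peer_busy_pct * (peer_speed / own_speed)
    return combined < 50.0

# Example: two equal CFs each running at 30% busy cannot absorb each other
# without exceeding the guideline.
print(cf_can_absorb_peer(30.0, 30.0))   # False
```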
  • 32. Link Performance
      Configure the fastest suitable links
       – Type: IC vs ICB vs ISC vs CIB vs CIB LR
       – Generation: eg ISC-3 vs ISC-2
      Configure enough of them
      Use Peer Mode
      Monitor for:
       – Signalling response times
       – Path Busy conditions
       – Subchannel Busy conditions
       – Request failures
  • 33. XCF Tuning
      Aim to reduce transfer times
       – "Mean Transfer Time" – MXFER TIME in RMF
      Aim to minimise traffic
       – Rates at all levels in RMF
       – Eg minimise locking False Contention
       – Eg set up GRS Star in a way that minimises ENQs
      Optimise the use of links
       – More modern CF-based links tend to outperform CTCs
          • But CTCs still better for small messages
          • CTCs drive SAP utilisation
       – RMF counts the number of times each path was chosen
          • Understand why signals use the paths in the ratio they do
      Transport Class buffer sizes
       – Buffers that are too big waste memory
       – Buffers that are too small have to be expanded
          • Sometimes with cost
       – RMF has counts of "Fit", "Small", "Big" and "Big With Overhead" messages
       – RMF lists transport classes and their maximum buffer size values
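The buffer-fit counts mentioned above lend themselves to a simple percentage report. A sketch with invented counts; the interpretation comments reflect the general tuning direction rather than hard rules:

```python
# Sketch of the transport class buffer-fit analysis described above, using the
# RMF "Fit", "Small", "Big" and "Big With Overhead" message counts.
# The counts below are made up for illustration.

def buffer_fit_report(counts: dict) -> None:
    total = sum(counts.values())
    for kind, n in counts.items():
        print(f"{kind:18s} {n:10d}  {n / total:6.1%}")
    # A large "Big With Overhead" share suggests the class buffer is too small;
    # a large "Small" share suggests the buffers are bigger than they need to be.

buffer_fit_report({
    "Fit": 1_200_000,
    "Small": 350_000,
    "Big": 40_000,
    "Big With Overhead": 15_000,
})
```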
  • 34. Group Buffer Pool Tuning
      GBP tuning has similarities to local pool tuning
       – But some twists
      Important to minimise traffic
       – Application GETPAGEs in general
       – Traffic to GBPs
      Also important to minimise response times
       – Which is mainly a matter of tuning the underlying CF access
      Minimising the amount of data actually shared may be practical
       – For many designs it isn't
      Important to avoid invalidations due to too few directory entries
       – GBP space is divided into Directory entries and Data elements
          • Directory entry reclaims occur if there are too few entries
             – Causing invalidations of local buffers
          • Installation can alter the balance
          • Installation can increase the size of the group buffer pool
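A commonly quoted rule of thumb, offered here as an assumption rather than a guarantee, is that a GBP needs roughly one directory entry for every page it might have to register: every buffer in every member's local pool plus the GBP's own data elements. A small sketch of that arithmetic:

```python
# Rough check for the "too few directory entries" problem described above.
# Rule of thumb (an assumption, not the official sizing method): directory
# entries should cover every local buffer in every member plus the GBP's own
# data elements.

def directory_entries_needed(local_pool_buffers: list[int], gbp_data_elements: int) -> int:
    return sum(local_pool_buffers) + gbp_data_elements

# Example: 4 members each with 100,000 local buffers for BP1, and a GBP1 with
# 50,000 data elements, wants roughly 450,000 directory entries.
print(directory_entries_needed([100_000] * 4, 50_000))
```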
  • 35. Group Buffer Pool Tuning - Traffic
      Reads:
       – Cross Invalidation reads
          • Data returned, i.e. the page is in the GBP
          • Data not returned, i.e. the page is known to be down-level but is not in the GBP
             – Requires a disk read
       – Buffer pool miss reads
          • Data returned, i.e. not in the local pool but in the GBP
          • Data not returned, i.e. in neither the local pool nor the GBP
       – A bigger GBP ought to provide more hits and fewer misses
          • But I rarely see high GBP hit ratios
      Writes:
       – Avoid GBPCACHE(NONE) as writes are SYNCHRONOUS TO DISK at Commit time
          • Harmful to the committing unit of work's performance
       – Writes can also be caused by the LOCAL pool's Deferred Write thresholds being hit
          • In this case Commits aren't waited for
      Castouts:
       – Dribbling out is a good idea
          • Just like for local pools
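The four read counters above combine naturally into a GBP read hit ratio ("data returned" reads over all reads). A minimal sketch with illustrative numbers:

```python
# GBP read hit ratio from the four read counters described above
# (parameter names are descriptive, not actual DB2 statistics field names).

def gbp_read_hit_ratio(xi_data: int, xi_no_data: int,
                       miss_data: int, miss_no_data: int) -> float:
    """Fraction of GBP reads that found the page in the GBP."""
    hits = xi_data + miss_data
    total = hits + xi_no_data + miss_no_data
    return hits / total if total else 0.0

# Example with illustrative numbers: (40k + 10k) / 200k = 25%.
print(f"{gbp_read_hit_ratio(40_000, 60_000, 10_000, 90_000):.0%}")
```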
  • 36. Locking Tuning
      It's important to reduce locking traffic at all levels
       – Application
       – DB2 subsystem
      It's also important to reduce False Contention
       – Usually by increasing the Lock Table portion of the LOCK1 structure
          • Number of entries will be a power of 2
       – A 4-byte lock table entry means fewer entries for the same size than a 2-byte one
      It's nice that in DB2 Version 8 there's a remapping of IRLM lock states to XES ones
       – May reduce XES lock contention
      CF request response times are also important
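A sketch of a false contention check, using a commonly quoted yardstick of keeping false contention to around 1% of lock requests (that threshold is a rule of thumb, not from this presentation):

```python
# Contention breakdown, in the spirit of the tuning advice above.
# The 1% yardstick for false contention is a common rule of thumb.

def contention_report(total_requests: int, real: int, xes: int, false: int) -> None:
    for name, n in [("real", real), ("XES", xes), ("false", false)]:
        print(f"{name:6s} contention: {n / total_requests:6.2%}")
    if false / total_requests > 0.01:
        print("Consider a bigger lock table (entry count doubles, being a power of 2).")

contention_report(total_requests=5_000_000, real=20_000, xes=15_000, false=80_000)
```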
  • 37. Special Coupling Facility Commands
      A number of special commands have been introduced to make CF requests more efficient
       – Generally have CFLEVEL prerequisites
       – READ_COCLASS
          • DB2 Version 6
       – WARM
          • DB2 Version 8
       – RFCOM
          • DB2 Version 8
      DB2 Version 8 Statistics Trace instruments WARM and RFCOM
  • 38. DB2 Version 9
      Restart performance improved
      Command to remove GBP Dependency at object level
       – ACCESS DB MODE(NGBPDEP)
       – Typical usage would be before a batch run
          • Issue on the member where you plan to run the job
      Improved performance for GBP writes
      DB2 overall health taken into account for WLM routing
      Balance Group Attach connections across members on the same LPAR
       – Usermod for Versions 7 and 8
      etc…
  • 39. Summary
      Parallel Sysplex has many benefits
       – More fully realised with Data Sharing
      Need to manage performance and cost carefully
      Configuration choices make an enormous difference
      Avoid shared Coupling Facilities for Production
      Good monitoring tools exist for both z/OS / Hardware and DB2
      Tune not only DB2 structure and XCF performance
       – But also other structures and users of XCF