MIMIX Reference




Version 6.0 MIMIX ha1™ and MIMIX ha Lite™ for IBM® i MIMIX Reference Published: March 2009 level: 6.0.06.00 Copyrights, Trademarks, and Notices Product conventions.................................................................................................. 16 Menus and commands ........................................................................................ 16 Accessing online help.......................................................................................... 16 Publication conventions............................................................................................. 16 Formatting for displays and commands .............................................................. 17 Sources for additional information............................................................................. 18 How to contact us...................................................................................................... 19 Chapter 1 MIMIX overview 21 MIMIX concepts......................................................................................................... 23 System roles and relationships ........................................................................... 23 Data groups: the unit of replication...................................................................... 24 Changing directions: switchable data groups ...................................................... 24 Additional switching capability ....................................................................... 25 Journaling and object auditing introduction ......................................................... 25 Log spaces .......................................................................................................... 26 Multi-part naming convention .............................................................................. 27 The MIMIX environment ............................................................................................ 
29 The product library .............................................................................................. 29 IFS directories ............................................................................................... 29 Job descriptions and job classes......................................................................... 30 User profiles .................................................................................................. 32 The system manager........................................................................................... 32 The journal manager ........................................................................................... 33 The MIMIXQGPL library ...................................................................................... 34 MIMIXSBS subsystem................................................................................... 34 Data libraries ....................................................................................................... 34 Named definitions................................................................................................ 35 Data group entries ............................................................................................... 35 Journal receiver management................................................................................... 37 Interaction with other products that manage receivers........................................ 38 Processing from an earlier journal receiver ......................................................... 38 Considerations when journaling on target ........................................................... 39 Operational overview................................................................................................. 40 Support for starting and ending replication.......................................................... 
40 Support for checking installation status ............................................................... 41 Support for automatically detecting and resolving problems ............................... 41 Support for working with data groups .................................................................. 41 Support for resolving problems ........................................................................... 42 Support for switching a data group...................................................................... 44 Support for working with messages .................................................................... 45 Replication process overview 46 Replication job and supporting job names ................................................................ 47 Cooperative processing introduction ......................................................................... 49 MIMIX Dynamic Apply ......................................................................................... 49 Legacy cooperative processing ........................................................................... 50 Advanced journaling ............................................................................................ 50 System journal replication ......................................................................................... 51 Processing self-contained activity entries ........................................................... 52 Chapter 2 2 Processing data-retrieval activity entries ............................................................. 53 Processes with multiple jobs ............................................................................... 55 Tracking object replication................................................................................... 55 Managing object auditing .................................................................................... 
55 User journal replication.............................................................................................. 58 What is remote journaling?.................................................................................. 58 Benefits of using remote journaling with MIMIX .................................................. 58 Restrictions of MIMIX Remote Journal support ................................................... 59 Overview of IBM processing of remote journals .................................................. 60 Synchronous delivery .................................................................................... 60 Asynchronous delivery .................................................................................. 62 User journal replication processes ...................................................................... 63 The RJ link .......................................................................................................... 63 Sharing RJ links among data groups............................................................. 63 RJ links within and independently of data groups ......................................... 64 Differences between ENDDG and ENDRJLNK commands .......................... 64 RJ link monitors ................................................................................................... 65 RJ link monitors - operation........................................................................... 65 RJ link monitors in complex configurations ................................................... 65 Support for unconfirmed entries during a switch ................................................. 67 RJ link considerations when switching ................................................................ 67 User journal replication of IFS objects, data areas, data queues.............................. 69 Benefits of advanced journaling .......................................................................... 
69 Replication processes used by advanced journaling .......................................... 70 Tracking entries ................................................................................................... 71 IFS object file identifiers (FIDs) ........................................................................... 72 Lesser-used processes for user journal replication................................................... 73 User journal replication with source-send processing ......................................... 73 The data area polling process ............................................................................. 74 Chapter 3 Preparing for MIMIX 76 Checklist: pre-configuration....................................................................................... 77 Data that should not be replicated............................................................................. 78 Planning for journaled IFS objects, data areas, and data queues............................. 79 Is user journal replication appropriate for your environment? ............................. 79 Serialized transactions with database files.......................................................... 79 Converting existing data groups .......................................................................... 79 Conversion examples .................................................................................... 80 Database apply session balancing ...................................................................... 81 User exit program considerations........................................................................ 81 Starting the MIMIXSBS subsystem ........................................................................... 83 Accessing the MIMIX Main Menu.............................................................................. 
84 Planning choices and details by object class 86 Replication choices by object type ............................................................................ 88 Configured object auditing value for data group entries............................................ 89 Identifying library-based objects for replication ......................................................... 91 How MIMIX uses object entries to evaluate journal entries for replication .......... 92 Identifying spooled files for replication ................................................................ 93 Additional choices for spooled file replication................................................ 94 Chapter 4 3 Replicating user profiles and associated message queues ................................ 95 Identifying logical and physical files for replication.................................................... 96 Considerations for LF and PF files ...................................................................... 96 Files with LOBs.............................................................................................. 98 Configuration requirements for LF and PF files................................................... 99 Requirements and limitations of MIMIX Dynamic Apply.................................... 101 Requirements and limitations of legacy cooperative processing....................... 102 Identifying data areas and data queues for replication............................................ 103 Configuration requirements - data areas and data queues ............................... 103 Restrictions - user journal replication of data areas and data queues .............. 104 Identifying IFS objects for replication ...................................................................... 106 Supported IFS file systems and object types .................................................... 106 Considerations when identifying IFS objects..................................................... 
107 MIMIX processing order for data group IFS entries..................................... 107 Long IFS path names .................................................................................. 107 Upper and lower case IFS object names..................................................... 107 Configured object auditing value for IFS objects ......................................... 108 Configuration requirements - IFS objects .......................................................... 108 Restrictions - user journal replication of IFS objects ......................................... 109 Identifying DLOs for replication ............................................................................... 111 How MIMIX uses DLO entries to evaluate journal entries for replication .......... 111 Sequence and priority order for documents ................................................ 111 Sequence and priority order for folders ....................................................... 112 Processing of newly created files and objects......................................................... 114 Newly created files ............................................................................................ 114 New file processing - MIMIX Dynamic Apply............................................... 114 New file processing - legacy cooperative processing.................................. 115 Newly created IFS objects, data areas, and data queues ................................. 115 Determining how an activity entry for a create operation was replicated .... 116 Processing variations for common operations ........................................................ 117 Move/rename operations - system journal replication ....................................... 117 Move/rename operations - user journaled data areas, data queues, IFS objects ... 118 Delete operations - files configured for legacy cooperative processing ............ 
121 Delete operations - user journaled data areas, data queues, IFS objects ........ 121 Restore operations - user journaled data areas, data queues, IFS objects ...... 121 Chapter 5 Configuration checklists 123 Checklist: New remote journal (preferred) configuration ......................................... 125 Checklist: New MIMIX source-send configuration................................................... 128 Checklist: Converting to remote journaling.............................................................. 131 Converting to MIMIX Dynamic Apply....................................................................... 133 Converting using the Convert Data Group command ....................................... 133 Checklist: manually converting to MIMIX Dynamic Apply.................................. 134 Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling .................... 136 Checklist: Converting to legacy cooperative processing ......................................... 138 System-level communications 140 Configuring for native TCP/IP.................................................................................. 140 Port aliases-simple example ............................................................................. 141 Chapter 6 4 Port aliases-complex example .......................................................................... 142 Creating port aliases ......................................................................................... 143 Configuring APPC/SNA........................................................................................... 144 Configuring OptiConnect ......................................................................................... 144 Chapter 7 Configuring system definitions 146 Tips for system definition parameters ..................................................................... 
147 Creating system definitions ..................................................................................... 150 Changing a system definition .................................................................................. 151 Multiple network system considerations.................................................................. 152 Configuring transfer definitions 154 Tips for transfer definition parameters..................................................................... 156 Using contextual (*ANY) transfer definitions ........................................................... 160 Search and selection process ........................................................................... 160 Considerations for remote journaling ................................................................ 161 Considerations for MIMIX source-send configurations...................................... 161 Naming conventions for contextual transfer definitions ..................................... 162 Additional usage considerations for contextual transfer definitions................... 162 Creating a transfer definition ................................................................................... 163 Changing a transfer definition ................................................................................. 165 Changing a transfer definition to support remote journaling.............................. 165 Finding the system database name for RDB directory entries ................................ 167 Using IBM i commands to work with RDB directory entries .............................. 167 Starting the TCP/IP server ...................................................................................... 168 Using autostart job entries to start the TCP server ................................................. 169 Identifying the current autostart job entry information ....................................... 
169 Changing an autostart job entry and its related job description ........................ 169 Using a different job description for an autostart job entry .......................... 170 Updating host information for a user-managed autostart job entry ............. 170 Updating port information for a user-managed autostart job entry .............. 171 Verifying a communications link for system definitions ........................................... 173 Verifying the communications link for a data group................................................. 174 Verifying all communications links..................................................................... 174 Configuring journal definitions 176 Journal definitions created by other processes ....................................................... 178 Tips for journal definition parameters ...................................................................... 179 Journal definition considerations ............................................................................. 184 Naming convention for remote journaling environments with 2 systems........... 185 Example journal definitions for a switchable data group ............................. 185 Naming convention for multimanagement environments .................................. 187 Example journal definitions for three management nodes .......................... 188 Journal receiver size for replicating large object data ............................................. 191 Verifying journal receiver size options .............................................................. 191 Changing journal receiver size options ............................................................. 191 Creating a journal definition..................................................................................... 192 Changing a journal definition................................................................................... 
194 Building the journaling environment ........................................................................ 195 Changing the journaling environment to use *MAXOPT3 ....................................... 196 Changing the remote journal environment .............................................................. 200 Chapter 8 Chapter 9 5 Adding a remote journal link.................................................................................... 202 Changing a remote journal link................................................................................ 204 Temporarily changing from RJ to MIMIX processing .............................................. 205 Changing from remote journaling to MIMIX processing .......................................... 206 Removing a remote journaling environment............................................................ 207 Chapter 10 Configuring data group definitions 209 Tips for data group parameters ............................................................................... 210 Additional considerations for data groups ......................................................... 220 Creating a data group definition .............................................................................. 221 Changing a data group definition ............................................................................ 225 Fine-tuning backlog warning thresholds for a data group ....................................... 225 Additional options: working with definitions 229 Copying a definition................................................................................................. 229 Deleting a definition................................................................................................. 230 Displaying a definition ............................................................................................. 
231 Printing a definition.................................................................................................. 232 Renaming definitions............................................................................................... 232 Renaming a system definition ........................................................................... 232 Renaming a transfer definition .......................................................................... 235 Renaming a journal definition with considerations for RJ link ........................... 236 Renaming a data group definition ..................................................................... 237 Swapping system definition names ......................................................................... 238 Configuring data group entries 241 Creating data group object entries .......................................................................... 242 Loading data group object entries ..................................................................... 242 Adding or changing a data group object entry................................................... 243 Creating data group file entries ............................................................................... 246 Loading file entries ............................................................................................ 246 Loading file entries from a data group’s object entries ................................ 247 Loading file entries from a library ................................................................ 249 Loading file entries from a journal definition ................................................ 250 Loading file entries from another data group’s file entries........................... 251 Adding a data group file entry ........................................................................... 252 Changing a data group file entry ....................................................................... 
253 Creating data group IFS entries .............................................................................. 255 Adding or changing a data group IFS entry....................................................... 255 Loading tracking entries .......................................................................................... 257 Loading IFS tracking entries.............................................................................. 257 Loading object tracking entries.......................................................................... 258 Creating data group DLO entries ............................................................................ 259 Loading DLO entries from a folder .................................................................... 259 Adding or changing a data group DLO entry ..................................................... 260 Creating data group data area entries..................................................................... 261 Loading data area entries for a library............................................................... 261 Adding or changing a data group data area entry ............................................. 262 Additional options: working with DG entries ............................................................ 263 Copying a data group entry ............................................................................... 263 Removing a data group entry ............................................................................ 264 Chapter 11 Chapter 12 6 Displaying a data group entry............................................................................ 265 Printing a data group entry ................................................................................ 265 Chapter 13 Additional supporting tasks for configuration 266 Accessing the Configuration Menu.......................................................................... 
268 Starting the system and journal managers.............................................................. 269 Setting data group auditing values manually........................................................... 270 Examples of changing of an IFS object’s auditing value ................................... 271 Checking file entry configuration manually.............................................................. 276 Changes to startup programs.................................................................................. 278 Starting the DDM TCP/IP server ............................................................................. 279 Verifying that the DDM TCP/IP server is running .............................................. 279 Checking DDM password validation level in use..................................................... 280 Option 1. Enable MIMIXOWN user profile for DDM environment...................... 280 Option 2. Allow user profiles without passwords ............................................... 281 Starting data groups for the first time ...................................................................... 282 Identifying data groups that use an RJ link ............................................................. 283 Using file identifiers (FIDs) for IFS objects .............................................................. 284 Configuring restart times for MIMIX jobs ................................................................. 285 Configurable job restart time operation ............................................................. 285 Considerations for using *NONE ................................................................. 287 Examples: job restart time ................................................................................. 287 Restart time examples: system definitions .................................................. 288 Restart time examples: system and data group definition combinations..... 
288 Configuring the restart time in a system definition ............................................ 291 Configuring the restart time in a data group definition....................................... 291 Starting, ending, and verifying journaling 293 What objects need to be journaled.......................................................................... 294 Authority requirements for starting journaling.................................................... 295 MIMIX commands for starting journaling................................................................. 296 Journaling for physical files ..................................................................................... 297 Displaying journaling status for physical files .................................................... 297 Starting journaling for physical files ................................................................... 297 Ending journaling for physical files .................................................................... 298 Verifying journaling for physical files ................................................................. 299 Journaling for IFS objects........................................................................................ 300 Displaying journaling status for IFS objects ...................................................... 300 Starting journaling for IFS objects ..................................................................... 300 Ending journaling for IFS objects ...................................................................... 301 Verifying journaling for IFS objects.................................................................... 302 Journaling for data areas and data queues............................................................. 303 Displaying journaling status for data areas and data queues............................ 303 Starting journaling for data areas and data queues .......................................... 
303 Ending journaling for data areas and data queues............................................ 304 Verifying journaling for data areas and data queues ......................................... 305 Configuring for improved performance 306 Minimized journal entry data ................................................................................... 307 Restrictions of minimized journal entry data...................................................... 307 Configuring for minimized journal entry data ..................................................... 308 Chapter 14 Chapter 15 7 Configuring for high availability journal performance enhancements...................... 309 Journal standby state ........................................................................................ 309 Minimizing potential performance impacts of standby state ........................ 310 Journal caching ................................................................................................. 310 MIMIX processing of high availability journal performance enhancements....... 310 Requirements of high availability journal performance enhancements ............. 311 Restrictions of high availability journal performance enhancements................. 311 Caching extended attributes of *FILE objects ......................................................... 313 Increasing data returned in journal entry blocks by delaying RCVJRNE calls ........ 314 Understanding the data area format.................................................................. 314 Determining if the data area should be changed............................................... 315 Configuring the RCVJRNE call delay and block values .................................... 315 Configuring high volume objects for better performance......................................... 317 Improving performance of the #MBRRCDCNT audit .............................................. 
318 Chapter 16 Configuring advanced replication techniques 320 Keyed replication..................................................................................................... 322 Keyed vs positional replication .......................................................................... 322 Requirements for keyed replication ................................................................... 322 Restrictions of keyed replication........................................................................ 323 Implementing keyed replication ......................................................................... 323 Changing a data group configuration to use keyed replication.................... 323 Changing a data group file entry to use keyed replication........................... 324 Verifying key attributes ...................................................................................... 326 Data distribution and data management scenarios ................................................. 327 Configuring for bi-directional flow ...................................................................... 327 Bi-directional requirements: system journal replication ............................... 327 Bi-directional requirements: user journal replication.................................... 328 Configuring for file routing and file combining ................................................... 329 Configuring for cascading distributions ............................................................. 331 Trigger support ........................................................................................................ 334 How MIMIX handles triggers ............................................................................. 334 Considerations when using triggers .................................................................. 334 Enabling trigger support .................................................................................... 
335
Synchronizing files with triggers ..... 335
Constraint support ..... 336
Referential constraints with delete rules ..... 336
Replication of constraint-induced modifications ..... 337
Handling SQL identity columns ..... 338
The identity column problem explained ..... 338
When the SETIDCOLA command is useful ..... 339
SETIDCOLA command limitations ..... 339
Alternative solutions ..... 340
SETIDCOLA command details ..... 341
Usage notes ..... 342
Examples of choosing a value for INCREMENTS ..... 342
Checking for replication of tables with identity columns ..... 343
Setting the identity column attribute for replicated files ..... 343
Collision resolution ..... 345
Additional methods available with CR classes ..... 345
Requirements for using collision resolution ..... 346
Working with collision resolution classes .....
347
Creating a collision resolution class ..... 347
Changing a collision resolution class ..... 348
Deleting a collision resolution class ..... 348
Displaying a collision resolution class ..... 348
Printing a collision resolution class ..... 349
Omitting T-ZC content from system journal replication ..... 350
Configuration requirements and considerations for omitting T-ZC content ..... 351
Omit content (OMTDTA) and cooperative processing ..... 352
Omit content (OMTDTA) and comparison commands ..... 352
Selecting an object retrieval delay ..... 354
Object retrieval delay considerations and examples ..... 354
Configuring to replicate SQL stored procedures and user-defined functions ..... 356
Requirements for replicating SQL stored procedure operations ..... 356
To replicate SQL stored procedure operations ..... 357
Using Save-While-Active in MIMIX ..... 358
Considerations for save-while-active ..... 358
Types of save-while-active options ..... 359
Example configurations ..... 359
Chapter 17  Object selection for Compare and Synchronize commands  360
Object selection process .....
360
Order precedence ..... 362
Parameters for specifying object selectors ..... 363
Object selection examples ..... 368
Processing example with a data group and an object selection parameter ..... 368
Example subtree ..... 371
Example Name pattern ..... 375
Example subtree for IFS objects ..... 376
Report types and output formats ..... 378
Spooled files ..... 378
Outfiles ..... 379
Chapter 18  Comparing attributes  380
About the Compare Attributes commands ..... 380
Choices for selecting objects to compare ..... 381
Unique parameters ..... 381
Choices for selecting attributes to compare ..... 382
CMPFILA supported object attributes for *FILE objects ..... 383
CMPOBJA supported object attributes for *FILE objects ..... 383
Comparing file and member attributes ..... 384
Comparing object attributes .....
387
Comparing IFS object attributes ..... 390
Comparing DLO attributes ..... 393
Chapter 19  Comparing file record counts and file member data  396
Comparing file record counts ..... 396
To compare file record counts ..... 397
Significant features for comparing file member data ..... 399
Repairing data ..... 399
Active and non-active processing ..... 399
Processing members held due to error ..... 399
Additional features ..... 400
Considerations for using the CMPFILDTA command ..... 400
Recommendations and restrictions ..... 400
Using the CMPFILDTA command with firewalls ..... 401
Security considerations ..... 401
Comparing allocated records to records not yet allocated ..... 401
Comparing files with unique keys, triggers, and constraints ..... 402
Avoiding issues with triggers ..... 402
Referential integrity considerations .....
403
Job priority ..... 403
CMPFILDTA and network inactivity ..... 404
Specifying CMPFILDTA parameter values ..... 404
Specifying file members to compare ..... 404
Tips for specifying values for unique parameters ..... 405
Specifying the report type, output, and type of processing ..... 408
System to receive output ..... 408
Interactive and batch processing ..... 408
Using the additional parameters ..... 408
Advanced subset options for CMPFILDTA ..... 410
Ending CMPFILDTA requests ..... 414
Comparing file member data - basic procedure (non-active) ..... 415
Comparing and repairing file member data - basic procedure ..... 418
Comparing and repairing file member data - members on hold (*HLDERR) ..... 421
Comparing file member data using active processing technology ..... 424
Comparing file member data using subsetting options ..... 427
Chapter 20  Synchronizing data between systems  431
Considerations for synchronizing using MIMIX commands ..... 433
Limiting the maximum sending size ..... 433
Synchronizing user profiles .....
433
Synchronizing user profiles with SYNCnnn commands ..... 434
Synchronizing user profiles with the SNDNETOBJ command ..... 434
Missing system distribution directory entries automatically added ..... 435
Synchronizing large files and objects ..... 435
Status changes caused by synchronizing ..... 435
Synchronizing objects in an independent ASP ..... 436
About MIMIX commands for synchronizing objects, IFS objects, and DLOs ..... 437
About synchronizing data group activity entries (SYNCDGACTE) ..... 438
About synchronizing file entries (SYNCDGFE command) ..... 439
About synchronizing tracking entries ..... 441
Performing the initial synchronization ..... 442
Establish a synchronization point ..... 442
Resources for synchronizing ..... 443
Using SYNCDG to perform the initial synchronization ..... 444
To perform the initial synchronization using the SYNCDG command defaults ..... 445
Verifying the initial synchronization ..... 447
Synchronizing database files ..... 449
Synchronizing objects ..... 451
To synchronize library-based objects associated with a data group ..... 451
To synchronize library-based objects without a data group .....
452
Synchronizing IFS objects ..... 455
To synchronize IFS objects associated with a data group ..... 455
To synchronize IFS objects without a data group ..... 456
Synchronizing DLOs ..... 459
To synchronize DLOs associated with a data group ..... 459
To synchronize DLOs without a data group ..... 460
Synchronizing data group activity entries ..... 462
Synchronizing tracking entries ..... 464
To synchronize an IFS tracking entry ..... 464
To synchronize an object tracking entry ..... 464
Sending library-based objects ..... 465
Sending IFS objects ..... 467
Sending DLO objects ..... 468
Chapter 21  Introduction to programming  469
Support for customizing ..... 470
User exit points ..... 470
Collision resolution ..... 470
Completion and escape messages for comparison commands .....
472
CMPFILA messages ..... 472
CMPOBJA messages ..... 473
CMPIFSA messages ..... 473
CMPDLOA messages ..... 474
CMPRCDCNT messages ..... 474
CMPFILDTA messages ..... 475
Adding messages to the MIMIX message log ..... 479
Output and batch guidelines ..... 480
General output considerations ..... 480
Output parameter ..... 480
Display output ..... 481
Print output ..... 481
File output ..... 483
General batch considerations ..... 484
Batch (BATCH) parameter ..... 484
Job description (JOBD) parameter ..... 484
Job name (JOB) parameter ..... 484
Displaying a list of commands in a library .....
485
Running commands on a remote system ..... 486
Benefits - RUNCMD and RUNCMDS commands ..... 486
Procedures for running commands RUNCMD, RUNCMDS ..... 487
Running commands using a specific protocol ..... 487
Running commands using a MIMIX configuration element ..... 489
Using lists of retrieve commands ..... 493
Changing command defaults ..... 494
Chapter 22  Customizing with exit point programs  495
Summary of exit points ..... 495
MIMIX user exit points ..... 495
MIMIX Monitor user exit points ..... 495
MIMIX Promoter user exit points ..... 496
Requesting customized user exit programs ..... 497
Working with journal receiver management user exit points ..... 498
Journal receiver management exit points ..... 498
Change management exit points ..... 498
Delete management exit points ..... 499
Requirements for journal receiver management exit programs ..... 499
Journal receiver management exit program example .....
502
Appendix A  Supported object types for system journal replication  505
Appendix B  Copying configurations  508
Supported scenarios ..... 508
Checklist: copy configuration ..... 509
Copying configuration procedure ..... 513
Appendix C  Configuring Intra communications  514
Manually configuring Intra using SNA ..... 515
Manually configuring Intra using TCP ..... 516
Appendix D  MIMIX support for independent ASPs  518
Benefits of independent ASPs ..... 519
Auxiliary storage pool concepts at a glance ..... 519
Requirements for replicating from independent ASPs ..... 522
Limitations and restrictions for independent ASP support ..... 522
Configuration planning tips for independent ASPs ..... 523
Journal and journal receiver considerations for independent ASPs ..... 524
Configuring IFS objects when using independent ASPs ..... 524
Configuring library-based objects when using independent ASPs ..... 524
Avoiding unexpected changes to the library list ..... 525
Detecting independent ASP overflow conditions ..... 527
Appendix E  Creating user-defined rules and notifications  528
What are rules and how they are used by auditing .....
529
Requirements for using audits and rules ..... 530
Guidelines and recommendations for auditing ..... 530
Considerations and recommendations for rules ..... 531
Replacement variables ..... 532
Rule-generated messages and notifications ..... 532
Creating user-defined rules ..... 534
Example of a user-defined rule ..... 534
Creating user-generated notifications ..... 535
Example of a user-generated notification ..... 536
Running user rules and rule groups programmatically ..... 538
Example of creating a monitor to run a user rule ..... 538
MIMIX rule groups ..... 539
Appendix F  Interpreting audit results  540
Resolving audit problems - MIMIX Availability Manager ..... 541
Resolving audit problems - 5250 emulator ..... 543
Checking the job log of an audit ..... 545
Interpreting results for configuration data - #DGFE audit ..... 546
Interpreting results of audits for record counts and file data ..... 548
What differences were detected by #FILDTA .....
548
What differences were detected by #MBRRCDCNT ..... 550
Interpreting results of audits that compare attributes ..... 551
What attribute differences were detected ..... 552
Where was the difference detected ..... 554
What attributes were compared ..... 554
Attributes compared and expected results - #FILATR, #FILATRMBR audits ..... 556
Attributes compared and expected results - #OBJATR audit ..... 561
Attributes compared and expected results - #IFSATR audit ..... 569
Attributes compared and expected results - #DLOATR audit ..... 571
Comparison results for journal status and other journal attributes ..... 573
How configured journaling settings are determined ..... 576
Comparison results for auxiliary storage pool ID (*ASP) ..... 577
Comparison results for user profile status (*USRPRFSTS) ..... 580
How configured user profile status is determined ..... 581
Comparison results for user profile password (*PRFPWDIND) ..... 583
Appendix G  Journal Codes and Error Codes  585
Journal entry codes for user journal transactions ..... 585
Journal entry codes for files ..... 585
Error codes for files in error ..... 587
Journal codes and entry types for journaled IFS objects ..... 590
Journal codes and entry types for journaled data areas and data queues .....
590
Journal entry codes for system journal transactions ..... 592
Appendix H  Outfile formats  595
Outfile support in MIMIX Availability Manager ..... 595
Work panels with outfile support ..... 596
MCAG outfile (WRKAG command) ..... 597
MCDTACRGE outfile (WRKDTACRGE command) ..... 600
MCNODE outfile (WRKNODE command) ..... 602
MXCDGFE outfile (CHKDGFE command) ..... 604
MXCMPDLOA outfile (CMPDLOA command) ..... 606
MXCMPFILA outfile (CMPFILA command) ..... 608
MXCMPFILD outfile (CMPFILDTA command) ..... 610
MXCMPFILR outfile (CMPFILDTA command, RRN report) ..... 613
MXCMPRCDC outfile (CMPRCDCNT command) ..... 614
MXCMPIFSA outfile (CMPIFSA command) ..... 617
MXCMPOBJA outfile (CMPOBJA command) ..... 619
MXDGACT outfile (WRKDGACT command) ..... 621
MXDGACTE outfile (WRKDGACTE command) ..... 623
MXDGDAE outfile (WRKDGDAE command) ..... 631
MXDGDFN outfile (WRKDGDFN command) ..... 632
MXDGDLOE outfile (WRKDGDLOE command) .....
640
MXDGFE outfile (WRKDGFE command) ..... 642
MXDGIFSE outfile (WRKDGIFSE command) ..... 646
MXDGSTS outfile (WRKDG command) ..... 648
WRKDG outfile SELECT statement examples ..... 670
WRKDG outfile example 1 ..... 670
WRKDG outfile example 2 ..... 670
WRKDG outfile example 3 ..... 671
WRKDG outfile example 4 ..... 671
MXDGOBJE outfile (WRKDGOBJE command) ..... 674
MXDGTSP outfile (WRKDGTSP command) ..... 677
MXJRNDFN outfile (WRKJRNDFN command) ..... 680
MXRJLNK outfile (WRKRJLNK command) ..... 684
MXSYSDFN outfile (WRKSYSDFN command) ..... 687
MXTFRDFN outfile (WRKTFRDFN command) ..... 691
MZPRCDFN outfile (WRKPRCDFN command) ..... 693
MZPRCE outfile (WRKPRCE command) ..... 694
MXDGIFSTE outfile (WRKDGIFSTE command) ..... 697
MXDGOBJTE outfile (WRKDGOBJTE command) ..... 699
Index  703

Product conventions

The conventions described here apply to all MIMIX products unless otherwise noted.

Menus and commands

Functionality for all MIMIX products is accessible from a common MIMIX Main Menu. The options you see on a given menu may vary according to which products are installed. When there is a corresponding command for a menu option, the command is shown at the far right of the display. You can use either the menu option or the command to access the function. If you enter a command without parameters, the system will prompt you for any required parameters. If you enter the command with all of the required parameters, the function is invoked immediately. Some commands can be submitted in batch jobs. To issue a command from a command line outside of the menu interface, you can add the product library name to your library list or you can qualify the command with the name of the product library.

Accessing online help

From a 5250 emulator, context sensitive online help is available for all MIMIX commands and displays. The position of your cursor determines what you will see. Simply press F1 to view help:
• To view general help for a command, prompt, or a menu, press F1 when the cursor is at the top of the display.
• To view help for a specific option, prompt, or column, press F1 when the cursor is located in the area for which you want help.
MIMIX Availability Manager includes online help that is accessible from within the product. From any window within MIMIX Availability Manager, selecting the Help icon will open the help system and access help for the current window.

Publication conventions

This book uses typography and specialized formatting to help you quickly identify the type of information you are reading. In text, for example, bold type identifies a new term whereas an underlined word highlights its importance. Notes and Attentions are specialized formatting techniques that are used, respectively, to highlight a fact or to warn you of the potential for damage. The following topics illustrate formatting techniques that may be used in this book.

Formatting for displays and commands

In instructions, specialized styles and techniques distinguish information you see on a display from information you enter on a display or command line. Table 1 shows the formatting used for the information you see on displays and command interfaces:

Table 1. Formatting examples for displays and commands

Convention               Description                                    Examples
Initial Capitalization   Names of menus or displays, commands,          MIMIX Basic Main Menu
                         keyboard keys, columns, and prompts on         Update Access Code command
                         displays. (Column names are also shown        Page Up key
                         in italic.)                                    The Status column
                                                                        The Start processes prompt
Italic                   Variables and user-defined values.             The library-name value
UPPERCASE                System-defined mnemonic names for              CHGUPSCFG command
                         commands, parameters, and values.              WARNMSG parameter
                                                                        The value *YES
monospace font           Examples showing programming code, and         DGDFN(name system1 system2)
                         text that you enter into a 5250 emulator       CHGVAR &RETURN &CONTINUE
                         command line. In examples, the conventions     Type the command MIMIX and press Enter.
                         of italic and UPPERCASE also apply.

Sources for additional information

This book refers to other published information. The following information, plus additional technical information, can be located in the IBM System i and i5/OS Information Center. From the Information Center you can access these IBM Power™ Systems topics, books, and redbooks:
• Backup and Recovery
• Journal management
• DB2 Universal Database for IBM Power™ Systems Database Programming
• Integrated File System Introduction
• Independent disk pools
• OptiConnect for OS/400
• TCP/IP Setup
• IBM redbook Striving for Optimal Journal Performance on DB2 Universal Database for iSeries, SG24-6286
• IBM redbook Power™ Systems iASPs: A Guide to Moving Applications to Independent ASPs,
SG24-6802

The following information may also be helpful if you use advanced journaling:
• DB2 UDB for iSeries SQL Programming Concepts
• DB2 Universal Database for iSeries SQL Reference
• IBM redbook AS/400 Remote Journal Function for High Availability and Data Replication, SG24-5189

How to contact us

For contact information, visit our Contact CustomerCare web page. If you are current on maintenance, support for MIMIX products is also available when you log in to Support Central. It is important to include product and version information whenever you report problems. If you use MIMIX Availability Manager, you should also include the version information provided at the bottom of each MIMIX Availability Manager window.

CHAPTER 1  MIMIX overview

This book provides concepts, configuration procedures, and reference information for MIMIX ha1 and MIMIX ha Lite. For simplicity, this book uses the term MIMIX to refer to the functionality provided by either product unless a more specific name is necessary.

MIMIX version 6 provides high availability for your critical data in a production environment on IBM Power™ Systems through real-time replication of changes. MIMIX continuously captures changes to critical database files and objects on a production system, sends the changes to a backup system, and applies the changes to the appropriate database file or object on the backup system. The backup system stores exact duplicates of the critical database files and objects from the production system.

One common use of MIMIX is to support a hot backup system to which operations can be switched in the event of a planned or unplanned outage. If a production system becomes unavailable, its backup is already prepared for users. In the event of an outage, you can quickly switch users to the backup system where they can continue using their applications. While users work on the backup system, MIMIX captures changes on the backup system for later synchronization with the original production system, for when that system is brought back online.

In addition to real-time backup capability, replicated databases and objects can be used for distributed processing, allowing you to off-load applications to a backup system. You can view the replicated data on the backup system at any time without affecting productivity. This allows you to generate reports, submit (read-only) batch jobs, or perform backups to tape from the backup system.

MIMIX uses two replication paths to address different pieces of your replication needs. These paths operate with configurable levels of cooperation or can operate independently:
• The system journal replication path handles replication of critical system objects (such as user profiles, program objects, or spooled files), integrated file system (IFS) objects, and document library objects (DLOs) using the system journal. In previous versions, MIMIX Object Replicator provided this function.
• The user journal replication path captures changes to critical files and objects configured for replication through a user journal, including database files, data areas, and data queues. When configuring this path, shipped defaults use the remote journaling function of the operating system to simplify sending data to the remote system. In previous versions, MIMIX DB2 Replicator provided this function.
• Configuration choices determine the degree of cooperative processing used between the system journal and user journal replication paths when replicating database files, IFS objects, data areas, and data queues.

Typically MIMIX is used among systems in a network. Simple environments have one production system and one backup system. More complex environments have multiple production systems or backup systems. MIMIX can also be used on a single system.

MIMIX assists you with analysis and synchronization of the database files and other objects. MIMIX also provides a means of verifying that the files and objects being replicated are what is defined to your configuration. This can help ensure the integrity of your MIMIX configuration. MIMIX automatically monitors your replication environment to detect and correct potential problems that could be detrimental to maintaining high availability.

The topics in this chapter include:
• "MIMIX concepts" on page 23 describes concepts and terminology that you need to know about MIMIX.
• "The MIMIX environment" on page 29 describes components of the MIMIX operating environment.
• "Journal receiver management" on page 37 describes how MIMIX performs change management and delete management for replication processes.
• "Operational overview" on page 40 provides information about day to day MIMIX operations.

MIMIX concepts

This topic identifies concepts and terminology that are fundamental to how MIMIX performs replication. You should be familiar with the relationships between systems.

A MIMIX installation is a network of systems that transfer data and objects among each other using functions of a common MIMIX product. A MIMIX installation is defined by the way in which you configure the MIMIX product for each of the participating systems. It is important to correctly identify the instance to which you are referring.

In an availability management context, replication occurs between two or more systems. In normal operations, for a basic two-system environment, the source system contains the journal entries used for replication. A target system is the system on which MIMIX replication activity between two systems completes. If you switch application processing to the backup system, the source and target roles change accordingly.
System roles and relationships

Usually, replication occurs between two or more systems. A MIMIX installation is defined by the way in which you configure the MIMIX product for each of the participating systems. A system can participate in multiple independent MIMIX installations. It is helpful to consider each installation of MIMIX on a system as being part of a separate network that is referred to as a MIMIX installation.

The terms production system and backup system describe the role of a system relative to the way applications are used on that system. In an availability management context, a production system is the system currently running the production workload for the applications; it is the system on which the principal copy of the data and objects associated with the application exist. A backup system is the system that is not currently running the production workload for the applications; it is the system on which you maintain a copy of the data and objects associated with the application. These roles are not always associated with a specific system. For example, when a production system is removed from the network for planned downtime and you switch application processing to the backup system, the backup system temporarily becomes the production system.

The most common scenario for replication is a two-system environment in which one system is used for production activities and the other system is used as a backup system. Typically, replicated data flows from the system running the production workload to the backup system. In a more complex environment, the terms production system and backup system may not be sufficient to clearly identify a specific system or its current role in the replication process. For example, if a payroll application on system CHICAGO is backed up on system LONDON and another application on system LONDON is backed up to the CHICAGO system, both systems are acting as production systems and as backup systems at the same time.

The terms source system and target system identify the direction in which an activity occurs between two participating systems. A source system is the system from which MIMIX replication activity between two systems originates; a target system is the system on which that activity completes. In replication, the source system contains the journal entries used for replication. Information from the journal entries is either replicated to the target system or used to identify objects to be replicated to the target system.

The terms management system and network system define the role of a system relative to how the products interact within a MIMIX installation. These roles remain associated with the system within the MIMIX installation to which they are defined. A management system is the system in a MIMIX installation that is designated as the control point for all installations of the product within the MIMIX installation. The management system is the location from which work to be performed by the product is defined and maintained. A network system is any system in a MIMIX installation that is not designated as the management system (control point) of that MIMIX installation. Work definitions are automatically distributed from the management system to a network system. Typically one system in the MIMIX installation is designated as the management system and the remaining one or more systems are designated as network systems. Often the system defined as the management system also serves as the backup system during normal operations, and a system defined as a network system also serves as the production system during normal operations.

Data groups: the unit of replication

The concept of a data group is used to control replication activities. A data group is a logical grouping of database files, library-based objects, IFS objects, DLOs, or a combination thereof that defines a unit of work by which MIMIX replication activity is controlled. A data group may represent an application, a set of one or more libraries, or all of the critical data on a given system. Application environments may define a data group as a specific set of files and objects. For example, the R/3 environment defines a data group as a set of SQL tables that all use the same journal and which are all replicated to the same system. The replication process is started and ended by operations on a data group. Users can start and stop replication activity by data group, switch the direction of replication for a data group, and display replication status by data group.

Once a data group definition is created, you can define data group entries. A data group entry identifies a source of information that can be replicated. MIMIX uses the data group entries that you create during configuration to determine whether a journal entry should be replicated. A data group can have any combination of entries for files, library-based objects, IFS objects, and DLOs. By default, data groups support replication from both the system journal and the user journal. If you are using both user journal and system journal replication, you can optionally limit a data group to replicate using only one replication path.

Changing directions: switchable data groups

When you configure a data group definition, you specify which of the two systems in the data group is the source for replicated data. You also define the data to be replicated and many other characteristics the replication process uses on the defined data. The parameters in the data group definition identify the direction in which data is allowed to flow between systems and whether to allow the flow to switch directions. In normal operation, data flows between two systems in the direction defined within the data group. By default, values in the data group definition allow the same data group to be used for replication from either direction. When you need to switch the direction of replication, the Switch Data Group (SWTDG) command will switch the direction in which replication occurs between systems.
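The switch operation described above is driven by the SWTDG command named in this topic. A hedged sketch, assuming a switchable data group named INVENTORY defined between systems CHICAGO and HONGKONG; additional SWTDG parameters (omitted here) vary by MIMIX release:

```
/* Switch the direction of replication for the INVENTORY data    */
/* group. DGDFN is the three-part data group name documented in  */
/* "Multi-part naming convention".                               */
SWTDG DGDFN(INVENTORY CHICAGO HONGKONG)
```

In practice, a switch is typically performed through MIMIX Switch Assistant and the MIMIX Model Switch Framework rather than by calling SWTDG directly, as the next topic describes.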
MIMIX provides support for switching due to planned and unplanned events. Typically, switching is performed by using the MIMIX Switch Assistant. MIMIX Switch Assistant provides a user interface that prompts you through the switch process, and calls your default MIMIX Model Switch Framework to control the switching process. When you perform switching in this manner, the exit programs called by your implementation of MIMIX Model Switch Framework must include the SWTDG command.

Note: A switchable data group is different than bi-directional data flow. Bi-directional data flow is a data sharing technique described in "Configuring advanced replication techniques" on page 320.

Additional switching capability

MIMIX ha1 and MIMIX ha Lite include MIMIX Monitor, which provides support for the MIMIX Model Switch Framework. Switching support in MIMIX Monitor includes logical and physical switching. Through this support, you can customize monitoring and switching programs. For some configurations, your authorized MIMIX representative can assist you in implementing advanced switching scenarios. For more information, see the Using MIMIX Monitor book.

Journaling and object auditing introduction

Journaling must be active before MIMIX can perform replication. MIMIX relies on data recorded by the IBM i functions of journaling, remote journaling, and object auditing. Each of these functions records information in a journal, and variations in the replication process are optimized according to characteristics of the information provided by each of these functions.

Journaling is the process of recording information about changes to user-identified objects, including those made by a system or user function. When an event occurs to an object or database file for which journaling is enabled, the system logs identifying information about the event as a journal entry. Events are logged in a user journal. Optionally, for a limited number of object types, logged events in a user journal can be on a remote system using remote journaling, whereby the journal and journal receiver exist on a remote system or on a different logical partition.

Object auditing is the process by which the system creates audit records for specified types of access to objects, or when a security-relevant event occurs. Object auditing logs events in a specialized system journal, the security audit journal (QAUDJRN).

The journal receiver is associated with a journal and contains the log of all activity for objects defined to the journal or all objects for which an audit trail is kept. MIMIX uses the recorded journal entries to replicate activity to a designated system. MIMIX uses entries from both journals. At the data group level, data group entries and other data group configuration settings determine whether MIMIX replicates activity for objects and whether replication is performed based on entries logged to the system journal or to a user journal. Journal entries deposited into the system journal (on behalf of an audited object) contain only an indication of a change to an object.
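The two IBM i logging functions described above are enabled with standard operating system commands. A hedged sketch (the library, file, and journal names are illustrative; MIMIX configuration commands normally start journaling for you):

```
/* User journal: journal a physical file so that changes are     */
/* logged as entries containing images of the changed data.      */
STRJRNPF FILE(APPLIB/ORDERS) JRN(APPLIB/APPJRN) IMAGES(*BOTH)

/* Object auditing: record change-type access to an object as    */
/* entries in the security audit journal (QAUDJRN).              */
CHGOBJAUD OBJ(APPLIB/ORDERS) OBJTYPE(*FILE) OBJAUD(*CHANGE)
```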
Some of these types of entries contain enough information for MIMIX to apply the change directly to the replicated object on the target system; however, many types of these entries require MIMIX to gather additional information about the object from the source system in order to apply the change. Journal entries deposited into a user journal (on behalf of a journaled file, data area, data queue, or IFS object) contain images of the data which was changed. This information is needed by MIMIX in order to apply the change directly to the replicated object on the target system.

IBM i (V5R4 and higher releases) allows journaling a maximum of 10,000,000 objects to one user journal. The maximum includes:
• Objects for which changes are currently being journaled
• Objects for which journaling was ended while the current receiver is attached
• Journal receivers that are, or were, associated with the journal while the current journal receiver is attached
User journaling will not start if the number of objects associated with the journal exceeds the journal maximum. Journals created by MIMIX have a maximum of 250,000 objects. MIMIX can use existing journals with this value.

IBM i requires that journaled objects reside in the same auxiliary storage pool (ASP) as the user journal. The journal receivers can be in a different ASP. If the journal is in a primary independent ASP, the journal receivers must reside in the same primary independent ASP or a secondary independent ASP within the same ASP group.

Newly created data groups use remote journaling as the default configuration. The IBM i remote journal function controls where it starts sending entries from the source journal receiver to the remote journal receiver. When replication is started, the start request (STRDG command) identifies a sequence number within a journal receiver at which MIMIX processing begins. In data groups configured with remote journaling, the specified sequence number and receiver name is the starting point for MIMIX processing from the remote journal. Remote journaling requires unique considerations for journaling and journal receiver management. For more information, see "Journal receiver management" on page 37 and "Naming convention for remote journaling environments with 2 systems" on page 185.

Log spaces

Based on user space objects (*USRSPC), a log space is a MIMIX object that provides an efficient storage and manipulation mechanism for replicated data that is temporarily stored on the target system during the receive and apply processes. All internal structures and objects that make up a log space are created and manipulated by MIMIX.

Multi-part naming convention

MIMIX uses named definitions to identify related user-defined configuration information. A multi-part, qualified naming convention uniquely describes certain types of definitions. This includes a two-part name for journal definitions and a three-part name for transfer definitions and data group definitions. The multi-part name consists of a name followed by one or two participating system names (actually, names of system definitions). Together the elements of the multi-part name define the entire environment for that definition.

As a whole unit, a fully-qualified two-part or three-part name must be unique. The first element, the name, does not need to be unique. In a three-part name, the order of the system names is also important, since two valid definitions may share the same three elements but with the system names in different orders. For example, the data group definitions INVENTORY CHICAGO HONGKONG and INVENTORY HONGKONG CHICAGO are unique because of the order of the system names.

MIMIX automatically creates a journal definition for the security audit journal when you create a system definition. Each of these journal definitions is named QAUDJRN, so the name alone is not unique. The name must be qualified with the name of the system to which the journal definition applies, such as QAUDJRN CHICAGO or QAUDJRN NEWYORK.

When using command interfaces which require a data group definition, MIMIX can derive the fully-qualified name of a data group definition if a partial name provided is sufficient to determine the unique name. This applies to all external interfaces that reference multi-part definition names. If the first part of the name is unique, it can be used by itself to designate the data group definition. For example, if the data group definition INVENTORY CHICAGO HONGKONG is the only data group with the name INVENTORY, then specifying INVENTORY on any command requiring a data group name is sufficient. However, if a second data group named INVENTORY NEWYORK LONDON is created, the name INVENTORY by itself no longer describes a unique data group; INVENTORY CHICAGO would be the minimum parts of the name of the first data group definition necessary to determine its uniqueness. The system HONGKONG appears in only one of the data group definitions, but specifying INVENTORY HONGKONG will generate a "not found" error because HONGKONG is not the first system in any of the data group definitions. If a third data group named INVENTORY CHICAGO LONDON was added, then the fully qualified name would be required to uniquely identify the data group.

MIMIX can also derive a fully qualified name for a transfer definition. Data group definitions and system definitions include parameters that identify associated transfer definitions. When a subsequent operation requires the transfer definition, MIMIX uses the context of the operation to determine the fully qualified name. For example, when starting a data group, MIMIX uses information in the data group definition, the systems specified in the data group name, and the specified transfer definition name to derive the fully qualified transfer definition name. If MIMIX cannot find the transfer definition, it reverses the order of the system names and checks again.

You can also use contextual system support (*ANY) to configure transfer definitions, avoiding the need for redundant transfer definitions. When you specify *ANY in a transfer definition, MIMIX uses information from the context in which the transfer definition is called to resolve to the correct system. Unlike the conventional configuration case, a specific search order is used if MIMIX is still unable to find an appropriate transfer definition. For more information, see "Using contextual (*ANY) transfer definitions" on page 160.
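The name-derivation rules above can be illustrated with the STRDG command mentioned earlier. A hedged sketch using the example data groups from this topic (whether an ambiguous partial name is rejected or prompted for may depend on the interface):

```
/* Three data groups exist: INVENTORY CHICAGO HONGKONG,          */
/* INVENTORY NEWYORK LONDON, and INVENTORY CHICAGO LONDON.       */

STRDG DGDFN(INVENTORY CHICAGO HONGKONG) /* fully qualified: always valid   */
STRDG DGDFN(INVENTORY NEWYORK)          /* unique from the first two parts */
STRDG DGDFN(INVENTORY CHICAGO)          /* ambiguous: matches two groups   */
STRDG DGDFN(INVENTORY HONGKONG)         /* "not found": HONGKONG is never  */
                                        /* the first system name           */
```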
The MIMIX environment

A variety of product-defined operating elements and user-defined configuration elements collectively form an operational environment on each system. Each system that participates in the same MIMIX environment must have the same operational environment. A MIMIX environment can be comprised of one or more MIMIX installations. This topic describes each of the components of the MIMIX operating environment.

The product library

The name of the product library into which MIMIX is installed defines the connection among systems in the same MIMIX installation. The default name of the product installation library is MIMIX. Several items are shipped as part of the product library. Each MIMIX installation also contains several default job descriptions and job classes within its library.

Note: Do not replicate the library in which MIMIX is installed or any other libraries created by MIMIX. Also do not place user created objects in this library. For additional information, see "Data that should not be replicated" on page 78.

IFS directories

A default IFS directory structure is used in conjunction with the library-based objects of the MIMIX family of products. The IFS directory structure is associated with the product library for the MIMIX installation and is created during the installation process for License Manager and MIMIX. Over time, the installation processes for products and fixes will restore objects to the IFS directory structure as well as to the QSYS library.

The directories created when License Manager is installed or upgraded follow these guidelines:

/LakeviewTech This is the root directory for all IFS-based objects.

/LakeviewTech/system-based-area This directory structure contains system-based objects that need to exist only once on a system. The system-based-area represents a unique directory for each set of objects. Two structures that you should be aware of are:

/LakeviewTech/Service/MIMIX/VvRrMm/ is the recommended location for users to place fixes downloaded from the website. The VvRrMm value is the same as the release of License Manager on the system. Multiple VvRrMm directories will exist as the release of License Manager changes.

/LakeviewTech/Upgrades/ is where the MIMIX Installation Wizard places software packages that it uploads to the system.

/LakeviewTech/UserData/ is available to users to store product-related data.

/LakeviewTech/ISC/ contains artifacts which enable the Vision Solutions plugin to appear in IBM Systems Director Navigator for IBM i5/OS under the category of i5/OS Management.

The directories created when MIMIX is installed or upgraded follow these guidelines. The requirements of your MIMIX environment determine the structure of these directories:

/LakeviewTech/MIMIX/product-installation-library There is a unique directory structure for each installation of MIMIX.

/LakeviewTech/MIMIX/product-installation-library/product-area There is a unique directory structure for each installation of MIMIX. The structure is determined by the set of objects needed by an area of the product and the product installation library.

Job descriptions and job classes

MIMIX uses a customized set of job descriptions and job classes. Job descriptions control batch processing, including the user profile, message logging level, job queue, and routing data for the job. Jobs and related output are associated with the user profile submitting the request. Customized job descriptions optimize characteristics for a category of jobs. Customized job classes optimize runtime characteristics such as the job priority and CPU time slice for a category of jobs. All of the shipped job descriptions and job classes are configured with recommended default values. Table 2 shows a combined list of MIMIX job descriptions.

MIMIX features use a set of default job descriptions, MXAUDIT, MXSYNC, and MXDFT, as the default value on the Job description (JOBD) parameter. Commands such as Compare File Attributes (CMPFILA), Compare File Data (CMPFILDTA), and Synchronize Object (SYNCOBJ), as well as numerous others, support this standard. These job descriptions exist in the product library of each MIMIX installation. When MIMIX is installed, these job descriptions are automatically restored in the product library.

Older commands that provide job description support for batch processing use different job descriptions that are located in the MIMIXQGPL library. The MIMIXQGPL library, along with these job descriptions, is automatically restored on the system when a MIMIX product is installed. Installing additional MIMIX installations on the same system does not create additional copies of these job descriptions.

Table 2. Job descriptions used by MIMIX

Shipped in the installation library:

MXAUDIT MIMIX Auditing. Used for MIMIX compare commands, such as those called by MIMIX audits. This is valid for verify and compare commands that do not have a JOBD parameter on the display.

MXDFT MIMIX Default. Used for MIMIX load commands and by other commands that do not have a specific job description as the default value on the JOBD parameter.

MXSYNC MIMIX Synchronization. Used for MIMIX synchronization commands. This is valid for synchronize commands that do not have a JOBD parameter on the display.

PORTnnnnn or alias name MIMIX TCP Server, where nnnnn identifies the server port number or alias. A job description exists for each transfer definition which uses TCP protocol and enables MIMIX to create and manage autostart job entries. The job descriptions are created in the installation library when transfer definitions which specify PROTOCOL(*TCP) and MNGAJE(*YES) are created or changed. The associated autostart job entries are added to the subsystem description for the MIMIXSBS subsystem in library MIMIXQGPL.

Shipped in the MIMIXQGPL library:

MIMIXAPY MIMIX Apply. Used for MIMIX apply process jobs.

MIMIXCMN MIMIX Communications. Used for all target communication jobs.

MIMIXDFT MIMIX Default. Used for all MIMIX jobs that do not have a specific job description.

MIMIXMGR MIMIX Manager. Used for MIMIX system manager and journal manager jobs.

MIMIXMON MIMIX Monitor. Used for most jobs submitted by the MIMIX Monitor product.

MIMIXPRM MIMIX Promoter. Used for jobs submitted by the MIMIX Promoter product.

MIMIXRGZ MIMIX Reorganize File. Used for file reorganization jobs submitted by the database apply job.

MIMIXSND MIMIX Send. Used for database send, container send, object send, object retrieve, and status send jobs in MIMIX.

MIMIXSYNC MIMIX Synchronization. Used for MIMIX file synchronization.

MIMIXUPS MIMIX UPS Monitor. Used for the uninterruptible power source (UPS) monitor managed by the MIMIX Monitor product.

MIMIXVFY MIMIX Verify. Used for MIMIX verify and compare command processes.
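As described for the PORTnnnnn entries, the TCP server job descriptions are created when a transfer definition specifies PROTOCOL(*TCP) and MNGAJE(*YES). A hedged sketch; the command name CRTTFRDFN is assumed from MIMIX naming conventions, and the definition and system names are illustrative:

```
/* Create a transfer definition between CHICAGO and HONGKONG.    */
/* PROTOCOL(*TCP) and MNGAJE(*YES) cause MIMIX to create a       */
/* PORTnnnnn job description in the installation library and an  */
/* autostart job entry in the MIMIXSBS subsystem (MIMIXQGPL).    */
CRTTFRDFN TFRDFN(CHIHKG CHICAGO HONGKONG) PROTOCOL(*TCP) MNGAJE(*YES)
```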
User profiles

All of the MIMIX job descriptions are configured to run jobs using the MIMIXOWN user profile. This profile owns all MIMIX objects, including the objects in the MIMIX product libraries and in the MIMIXQGPL library. The profile is created with sufficient authority to run all MIMIX products and perform all the functions provided by the MIMIX products. The authority of this user profile can be reduced, if business practices require, but this is not recommended. Reducing the authority of the MIMIXOWN profile requires significant effort by the user to ensure that the products continue to function properly and to avoid adversely affecting the performance of MIMIX products. See the License and Availability Manager book for additional security information for the MIMIXOWN user profile.

Note: Do not replicate the MIMIXOWN or LAKEVIEW user profiles. For additional information, see "Data that should not be replicated" on page 78.

The system manager

The system manager consists of a pair of system management communication jobs between a management system and a network system. Each pair has a send side system manager job and a receiver side system manager job. These jobs must be active to enable replication. The system manager also gathers messages and timestamp information from the network system and places them in a message log and timestamp file on the management system. Dynamic status changes are also collected and returned to the management system. Once started, the system manager monitors for configuration changes and automatically moves any configuration changes to the network system. In addition, the system manager performs periodic maintenance tasks, including cleanup of the system and data group history files.

Figure 1 shows a MIMIX installation with a management system and two network systems. Each arrow represents a pair of system manager jobs. In this installation, there are four pairs of system manager jobs: two between the first network system and the management system and two between the second network system and the management system. Since each pair has a send side system manager job and a receiver side system manager job, there are eight total system manager jobs in this installation.

Figure 1. System manager jobs in a MIMIX installation with one management system and two network systems.

The System manager delay parameter in the system definition determines how frequently the system manager looks for work. Other parameters in the system definition control other aspects of system manager operation. System manager jobs are included in a group of jobs that MIMIX automatically restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to restart these MIMIX jobs at midnight (12:00 a.m.). MIMIX determines when to restart the system managers based on the value of the Job restart time parameter in the system definitions for the network and management systems. For more information, see "Configuring restart times for MIMIX jobs" on page 285.

The journal manager

The journal manager is the process by which MIMIX maintains journal receivers on a system. MIMIX performs both change management and delete management for journal receivers used by the replication process. Parameters in a journal definition allow you to customize details of how the change and delete operations are performed. For more information, see "Journal receiver management" on page 37 and "Journal definition considerations" on page 184.

A journal manager job runs on each system in a MIMIX installation. If you have a MIMIX installation with a management system and two network systems, you have three journal manager jobs, one on each system. The Journal manager delay parameter in the system definition determines how frequently the journal manager looks for work. Journal manager jobs are included in a group of jobs that MIMIX automatically restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to restart these MIMIX jobs at midnight (12:00 a.m.). The Job restart time parameter in the system definition determines when the journal manager for that system restarts. For more information, see "Configuring restart times for MIMIX jobs" on page 285.

The MIMIXQGPL library

When a MIMIX product is installed, a library named MIMIXQGPL is restored on the system. The MIMIXQGPL library includes work management objects used by all MIMIX products. These objects include the MIMIXSBS subsystem and a variety of job descriptions and job classes. Many of these objects are customized and shipped with default settings designed to streamline operations for the products which use them. You should not place objects in this library. Also, do not replicate the MIMIXQGPL library. For additional information, see "Data that should not be replicated" on page 78.

Note: If you have previous releases of MIMIX products on a system, you may find additional objects in the MIMIXQGPL library.

MIMIXSBS subsystem

The MIMIXSBS subsystem is the default subsystem used by nearly all MIMIX-related processing. This subsystem is shipped with the proper job queue entries and routing entries for correct operation of the MIMIX jobs.

Data libraries

MIMIX uses the concept of data libraries. The names of data libraries are of the form product-library_n (where n is a number starting at 1). Currently there are two series of data libraries:
• MIMIX uses data libraries for storing the contents of the object cache. MIMIX creates the first data library when needed and may create additional data libraries.
• For system journal replication, MIMIX creates libraries named product-library_x, where x is derived from the ASP: for example, A for ASP 1, B for ASP 2. These ASP-specific data libraries are created when needed and are not deleted until the product is uninstalled.
If you place objects in these libraries, they may be deleted during the next installation process.
Named definitions

MIMIX uses named definitions to identify related user-defined configuration information. You can create named definitions for system information, journal information, communication (transfer) information, and replication (data group) information. Any definitions you create can be used by both user journal and system journal replication processes. The naming conventions used within definitions are described in "Multi-part naming convention" on page 27. One or more of each of the following definitions is required to perform replication:
• A system definition identifies to MIMIX the characteristics of a system that participates in a MIMIX installation.
• A transfer definition identifies to MIMIX the communications path and protocol to be used between two systems. MIMIX supports Systems Network Architecture (SNA), OptiConnect, and Transmission Control Protocol/Internet Protocol (TCP/IP) protocols.
• A journal definition identifies to MIMIX a journal environment on a particular system. MIMIX uses the journal definition to manage the journal receiver environment used by the replication process.
• A remote journal link (RJ link) is a MIMIX configuration element that identifies an IBM i remote journaling environment. An RJ link identifies the journal definitions that define the source and target journals, the primary and secondary transfer definitions for the communications path used by MIMIX, and whether the IBM i remote journal function sends journal entries asynchronously or synchronously. Newly created data groups use remote journaling as the default configuration. When a data group is added, the ADDRJLNK command is run automatically, using the transfer definition defined in the data group.
• A data group definition identifies to MIMIX the characteristics of how replication occurs between two systems. It determines the direction in which replication occurs between the systems, whether that direction can be switched, and the default processing characteristics to use when processing the database and object information associated with the data group.

Data group entries

Data group entries are part of the MIMIX environment and must exist on each system in a MIMIX installation. MIMIX uses the data group entries that you create during configuration to determine whether or not a journal entry should be replicated.
• Data group file entry: This type of data group entry identifies the location of a database file to be replicated and what its name and location will be on the target system. Within a file entry, you can override the default file entry options defined for the data group. MIMIX supports both positional and keyed access paths for accessing records stored in a physical file. MIMIX only replicates transactions for physical files because a physical file contains the actual data stored in members.
• Data group object entries: This type of entry allows you to identify library-based objects for replication. Examples of library-based objects include programs, user profiles, message queues, and non-journaled database files. To select these types of objects for replication, you select individual objects or groups of objects by generic or specific object and library name, and object type. Optionally, for files, you can specify an extended object attribute such as PF-DTA or DSPF.
• Data group IFS entries: This type of entry allows you to identify integrated file system (IFS) objects for replication. IFS objects include directories, stream files, and symbolic links. They reside in directories, similar to DOS or UNIX files. You can select IFS objects for replication by specific or generic path name.
• Data group DLO entries: This type of entry allows you to identify document library objects (DLOs) for replication. DLOs are documents and folders; they are contained in folders (except for first-level folders). To select DLOs for replication, you select individual DLOs by specific or generic folder and DLO name, and owner.
• Data group data area entries: This type of entry allows you to define a data area for replication by the data area polling process. However, the preferred way to replicate data areas is to use advanced journaling.

A single data group can contain any combination of these types of data group entries. If your license is for only one of the MIMIX products rather than for MIMIX ha1 or MIMIX ha Lite, only the entries associated with the product to which you are licensed will be processed for replication.

Journal receiver management

Parameters in journal definition commands determine how change management and delete management are performed on the journal receivers used by the replication process. Shipped default values allow MIMIX to perform change management and delete management. You can also customize how MIMIX performs journal receiver change management through the use of exit programs. For more information, see "Tips for journal definition parameters" on page 179 and "Working with journal receiver management user exit points" on page 498.

Change management: The Receiver change management (CHGMGT) parameter controls how the journal receivers are changed. The shipped default value *TIMESIZE results in MIMIX changing the journal receiver by both threshold size and time of day. Additional parameters in the journal definition control the size at which to change (THRESHOLD), the time of day to change (TIME), and when to reset the receiver sequence number (RESETTHLD2 or RESETTHLD). The conditions specified in these parameters must be met before change management can occur.

Note: The value *TIME can be specified with *SIZE or *SYSTEM to achieve the same results as *TIMESIZE or *TIMESYS, respectively.

If you do not use the default value *TIMESIZE for CHGMGT, consider the following:
• When you specify *TIMESYS, the system manages the receiver by size and during IPLs, and MIMIX manages changing the receiver at a specified time.
• When you allow the system to perform change management (*SYSTEM) and the attached journal receiver reaches its threshold, the system detaches the journal receiver and creates and attaches a new journal receiver. During an initial program load (IPL) or the vary on of an independent ASP, the system performs a CHGJRN command to create and attach a new journal receiver and to reset the journal sequence number of journals that are not needed for commitment control recovery for that IPL or vary on, unless the receiver size option (RCVSIZOPT) is *MAXOPT3. When RCVSIZOPT is *MAXOPT3, the sequence number will not be reset and a new journal receiver will not be attached unless the sequence number exceeds the sequence number threshold.
• When you specify *NONE, MIMIX does not handle changing the journal receivers. You must ensure that the system or another application performs change management to prevent the journal receivers from overflowing.
• In a remote journaling configuration, you can specify in the source journal definition whether receiver change management is performed by the system or by MIMIX. The remote journal receiver is changed automatically by the IBM i remote journal function when the receiver on the source system is changed. MIMIX recognizes remote journals and ignores change management for the remote journals; any change management values you specify for the target journal definition are ignored.

Delete management: The Receiver delete management (DLTMGT) parameter controls how the journal receivers used for replication are deleted. It is strongly recommended that you use the value *YES to allow MIMIX to perform delete management.
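As a sketch of how these journal definition parameters fit together, the following assumes a MIMIX Change Journal Definition (CHGJRNDFN) command that accepts the parameters named above; the definition name, system, and values are illustrative placeholders, and the actual syntax should be verified by prompting the command on your system.

```
/* Illustrative only: have MIMIX change the attached receiver by  */
/* both threshold size and time of day, and let MIMIX perform     */
/* delete management once all retention criteria are met.         */
CHGJRNDFN JRNDFN(MYJRNDFN SYSTEM1) +
          CHGMGT(*TIMESIZE) +
          THRESHOLD(1500000) +  /* size at which to change;       */
                                /* units per your MIMIX release   */
          TIME(2300) +          /* time of day to change; format  */
                                /* per your MIMIX release         */
          DLTMGT(*YES)
```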
When MIMIX performs delete management, the journal receivers are only deleted after MIMIX is finished with them and all other criteria specified on the journal definition are met. The criteria include how long to retain unsaved journal receivers (KEEPUNSAV), how many detached journal receivers to keep (KEEPRCVCNT), and how long to keep detached journal receivers (KEEPJRNRCV). MIMIX operations can be affected if you allow the system to handle delete management; the system may delete a journal receiver before MIMIX has completed its use. If you choose to manage journal receivers yourself, including system managed receivers, you need to use the journal receiver delete management exit points to control deleting the journal receivers. For more information, see "Working with journal receiver management user exit points" on page 498.

In a remote journaling environment, the IBM i remote journal function does not allow a receiver to be deleted until it is replicated from the local journal (source) to the remote journal (target). Delete management of the source and target receivers occurs independently. For example, a target journal receiver cannot be deleted until it is processed by the database reader (DBRDR) process and it meets the other criteria defined in the journal definition.

Note: If more than one MIMIX installation uses the same journal, the journal manager for each installation can delete the journal receivers regardless of whether the other installations are finished with them.

Interaction with other products that manage receivers

If you run MIMIX replicate1 on the same system as MIMIX ha1 (or MIMIX ha Lite), there may be considerations for journal receiver management. If both products scrape from the same journal, you need to ensure that journal receivers are not removed before MIMIX has finished processing them.

Although both MIMIX replicate1 and MIMIX ha1 support receiver change management, you need to choose only one product to perform change management activities for a specific journal. If you choose MIMIX ha1, see change management for available options that can be specified in the journal definition. If you choose MIMIX replicate1, your MIMIX ha1 journal definition should specify CHGMGT(*NONE).

It is highly recommended that you configure the journal definitions to have MIMIX perform journal delete management. If you have this scenario, perform delete management only from MIMIX replicate1; the journal definition within MIMIX ha1 should specify DLTMGT(*NO). This will prevent MIMIX ha1 from deleting receivers before MIMIX replicate1 is finished with them.

Processing from an earlier journal receiver

It is possible to have a situation where the operating system attempts to retransmit journal receivers that already exist on the target system. When this situation occurs, the remote journal function ends with an error and transmission of entries to the target system stops. This can occur in the following scenarios:
• When performing a clear pending start of the data group while also specifying a sequence number that is earlier in the journal stream than the last processed sequence number
• When starting a data group while specifying a database journal receiver that is earlier in the receiver chain than the last processed receiver
• When starting a data group after a switch

When starting a data group after a switch, additional journal receivers can become stranded on the backup system. As part of the switch processing, the journal receiver is changed before the data group is started. Because the backup system is now temporarily acting as the source system, the IBM i remote journal function begins transmitting journal entries from the just-changed journal receiver and interprets any earlier receivers as unprocessed source journal receivers, preventing them from being deleted. To remove these stranded journal receivers, you need to use the IBM command DLTJRNRCV with *IGNTGTRCV specified as the value of the DLTOPT parameter.

For an example, refer to Figure 2. In this example, replication ended while processing journal entries in target receiver 2, and target journal receiver 1 was deleted through the configured delete management options. If the data group is started (STRDG) with a starting journal sequence number for an entry that is in journal receiver 1, the remote journal function attempts to retransmit source journal receivers 1 through 4, beginning with receiver 1. Because receiver 2 already exists on the target system, when the operating system encounters receiver 2, an error occurs and the transmission to the target system ends. You can prevent this situation before starting the data group by deleting any target journal receivers following the receiver that will be used as the starting point. If you encounter the problem, recovery is simply to remove the target journal receivers and let remote journaling resend them. In this example, deleting target receiver 2 would prevent or resolve the problem.

Figure 2. Example of processing from an earlier journal receiver.
Source Journal Receivers: 4 3 2 1
Target Journal Receivers: 1 2

Considerations when journaling on target

The default behavior for MIMIX is to have journaling enabled on the target system for the target files. After a transaction is applied to the target system, MIMIX writes the journal entry to a separate journal on the target system. This journaling on the target system makes it easier and faster to start replication from the backup system following a switch.
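The stranded-receiver cleanup described above uses the IBM DLTJRNRCV command. A minimal sketch; the library and receiver names are placeholders for your own environment.

```
/* Delete a stranded target journal receiver. DLTOPT(*IGNTGTRCV)  */
/* ignores the target receiver condition that would otherwise     */
/* prevent deletion of a receiver associated with a remote        */
/* journal.                                                       */
DLTJRNRCV JRNRCV(MYLIB/RCV0002) DLTOPT(*IGNTGTRCV)
```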
Operational overview

Before replication can begin, the following requirements must be met through the installation and configuration processes:
• MIMIX software must be installed on each system in the MIMIX installation.
• The MIMIX operating environment must be configured and be available on each system.
• At least one communications link must be in place for each pair of systems between which replication will occur.
• The files and objects must be initially synchronized between the systems participating in replication.
• Journaling must be active for the database files and objects configured for user journal replication.
• For objects to be replicated from the system journal, the object auditing environment must be set up.

Once MIMIX is configured and files and objects are synchronized, day-to-day operations for MIMIX can be performed from either the web-based MIMIX Availability Manager or from a 5250 emulator. MIMIX Availability Manager is easy to use and preferable for daily operations. Through preferences, individuals have the ability to customize what systems, installations, and data groups to monitor. Newer MIMIX functions may only be available through this user interface. In the following paragraphs, only 5250 command names are used for simplicity; the corresponding windows have the same names as the commands to which they pass information.

Support for starting and ending replication

MIMIX Availability Manager and the 5250 emulator can be used to start and end replication. The Start MIMIX (STRMMX) and End MIMIX (ENDMMX) commands provide the ability to start and end all elements of a MIMIX environment, including MIMIX services and manager jobs, all replication jobs for all data groups, and the master monitor and jobs that are associated with it. While other commands are available to perform these functions individually, the STRMMX and ENDMMX commands are preferred because they ensure that processes are started or ended in the appropriate order.

The Start Data Group (STRDG) and End Data Group (ENDDG) commands operate at the data group level to control replication processes. These commands provide the flexibility to start or end selected processes and apply sessions associated with a data group, which can be helpful for balancing workload or resolving problems. For more information about both sets of commands, see the Using MIMIX book.

Support for checking installation status

Only MIMIX Availability Manager provides the ability to monitor multiple installations of MIMIX at once from a single interface. Status from each installation 'bubbles up' to the Enterprise View, where you can quickly see whether a problem exists on the systems you are monitoring. Status icons and flyover text start the problem resolution process by guiding you to the appropriate action for the most severe problem present.

From a 5250 emulator, the MIMIX Availability Status display reports the prioritized status of a single installation. Status from the installation is reported in three areas: Replication, Audits and Notification, and Services. Status icons or highlighted text indicate whether problems exist. Color and informational messages identify the most severe problem present in an area and identify the action to take to start problem isolation.

Support for automatically detecting and resolving problems

The functions provided by MIMIX AutoGuard are fully integrated into MIMIX user interfaces.

Audits: MIMIX ships with a set of audits and associated audit monitors that are automatically scheduled to run daily. These audits check for common problems and automatically correct any detected problems within a data group. Audits can also be invoked manually, and automatic recovery can be optionally disabled. The Work with Audits display (WRKAUD) provides a summary view for audit status and a compliance view for adherence to auditing best practices.

Error recovery during replication: MIMIX AutoGuard also provides the ability to have MIMIX check for and correct common problems during user journal and system journal replication that would otherwise cause a replication error. Problems that cannot be resolved are reported like any other replication error. Automatic recovery can be optionally disabled.

For detailed information about MIMIX AutoGuard, see the Using MIMIX book.

Support for working with data groups

Data groups are central to performing day-to-day operations. The Data Group Status window in MIMIX Availability Manager and the Work with Data Groups (WRKDG) display provide the status of replication jobs and an indication of any replication errors for the data groups within an installation. Many options are available for taking action at the data group level and for drilling into detailed status information.

Detailed status: When checking detailed status for a data group, MIMIX Availability Manager provides significant benefits over 5250 emulator commands. In the 5250 emulator, the command DSPDGSTS (option 8 from the Work with Data Groups display) accesses the Data Group Status display. The initial view summarizes replication errors and the status of user journal (database) and system journal (object) processes for both source and target systems. By using function keys, you can display additional detailed views of only database or only object status.
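The start and end commands described in this section can be sketched as follows. The three-part data group name (name, first system, second system) follows the multi-part naming convention; the names shown are placeholders, and the DGDFN parameter name is an assumption that should be verified by prompting the commands on your system.

```
/* Start all elements of the MIMIX environment in the proper      */
/* order: services and manager jobs, replication jobs, and the    */
/* master monitor.                                                */
STRMMX

/* Start, then end, replication for a single data group. The      */
/* DGDFN parameter and three-part value are assumptions.          */
STRDG DGDFN(APPDG SYSTEM1 SYSTEM2)
ENDDG DGDFN(APPDG SYSTEM1 SYSTEM2)
```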
When you choose to display detailed status for a data group from MIMIX Availability Manager, the highest priority problem that exists for the data group determines which of several possible views of the Data Group Details window is displayed:

Data Group Details - Status: This window identifies all of the replication jobs and services jobs needed by the data group and provides their status.

Data Group Details - User Journal: This window represents replication performed by user journal replication processes, including journal progress, performance, and recent activity. It includes information about the replication of user journal transactions for journaled files, IFS objects, data areas, and data queues. Similar information is available from database views of the Data Group Status display.

Data Group Details - System Journal: This window represents replication performed by system journal replication processes, including journal progress, performance, recent activity, and spooled file activity. Similar information is available from object views of the Data Group Status display.

Data Group Details - Activity: This window summarizes activity for the selected data group that is experiencing replication problems. The window displays only one type of problem at a time, based on the activity type selected from the navigation bar. Problems are grouped by type of activity: File, Object, IFS, DLO, IFS Tracking, or Object Tracking. Similar information is available from the merged view of the Data Group Status display.

You can often take action to resolve problems directly from these detailed status windows.

Support for resolving problems

MIMIX includes functions that can assist you in resolving a variety of problems. MIMIX Availability Manager provides superior assistance for problem resolution. Action lists include only the appropriate choices for the problem and only those available from the system you are viewing. Depending on the type of problem, some problem resolution tasks may need to be performed from the system where the problem occurs, such as on the source system where the journal resides or on the target system if the problem is related to the apply process. MIMIX will direct you to the correct system when this is required. Similar information is available in the 5250 emulator when you use the following options from the Work with Data Groups display: 12=Files not active, 13=Objects in error, 51=IFS trk entries not active, and 53=Obj trk entries not active.

Object activity: The Work with Data Group Activity (WRKDGACT) command allows you to track system journal replication activity associated with a data group. Options on the Work with Data Group Activity display allow you to see messages associated with an entry, which can help you determine the cause of an error. You can also see an error view that identifies the reason why the object is in error. Other options allow you to synchronize the entry between systems and to remove a failed entry with or without related entries. MIMIX Availability Manager provides similar capabilities to those of WRKDGACT from the following windows: Data Group Details - System Journal, Data Group Details - Activity, and Object Activity Details. Default filtering options in MIMIX Availability Manager display only problems with replicating objects from the system journal.

Files on hold: When the database apply process detects a data synchronization problem, it places the file (individual member) on "error hold" and logs an error. File entries are in held status when an error is preventing them from being applied to the target system. You need to analyze the cause of the problem in order to determine how to correct and release the file and ensure that the problem does not occur again. The Work with DG Files on Hold (WRKDGFEHLD) command allows you to work with file entries that are in a held status; an option on the Work with Data Groups display provides quick access to the subset of file entries that are in error for a data group. From the Work with DG File Entries display, you can see the status of an entry and use a number of options to assist in resolving the error, including viewing and working with the entry for which the error was detected and with all other entries following the entry in error. An alternative view shows the database error code and journal code. MIMIX Availability Manager provides similar capabilities to those of WRKDGFEHLD from the following windows: Data Group Details - User Journal, Data Group Details - Activity, and File Activity Details. Default filtering options in MIMIX Availability Manager display only problems with replicating objects from the user journal.

Failed requests: During normal processing, system journal replication processes may encounter object requests that cannot be processed due to an error. Often the error is due to a transient condition, such as when an object is in use by another process at the time the object retrieve process attempts to gather the object data. When the Automatic object recovery policy is enabled, MIMIX will attempt a third retry cycle using the settings from the Number of third delay/retries (OBJRTY) and Third retry interval (min.) (OBJRTYITV) policies. These policies can be set for the installation or adjusted for a specific data group. Although MIMIX will attempt some automatic retries, requests may still result in a Failed status. Some errors may require user intervention, such as a never-ending process that holds a lock on the object. These entries can be viewed using the Work with Data Group Activity (WRKDGACT) command. You can manually request that MIMIX retry processing for a data group activity entry that has a status of *FAILED; in many cases, failed entries can be resubmitted and they will succeed. From the Work with Data Group Activity display, you can use the retry option to resubmit individual failed entries or all of the entries for an object. This option calls the Retry Data Group Activity Entries (RTYDGACTE) command. You can also specify a time at which to start the request, thereby delaying the retry attempt until a time when it is more likely to succeed. MIMIX Availability Manager supports manually retrying activities from appropriate windows by providing Retry as an available action in the Action List.

Journal analysis: With user journal replication, when the system that is the source of replicated data fails, it is possible that some of the generated journal entries may not have been transmitted to or received by the target system. Once the source system is available again, you can use the journal analysis function to help determine what journal entries may have been missed and to which files the data belongs. However, it is not always possible to determine this until the failed system has been recovered. Even if the failed system is recovered, damage to a disk unit or to the journal itself may prevent an accurate analysis of any missed data. If there is no damage to the disk unit or journal and its associated journal receivers, journal analysis can proceed. You can only perform journal analysis on the system where a journal resides.
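The 5250 problem-resolution flow described in this section can be sketched as a short command sequence. Only the command names are taken from this section; parameter details are omitted because they vary by release, so each command can be prompted (F4) for its options.

```
WRKDG        /* Work with Data Groups: check replication status;  */
             /* option 8 (DSPDGSTS) shows detailed status.        */
WRKDGACT     /* Review system journal activity entries in error;  */
             /* the retry option calls RTYDGACTE for *FAILED      */
             /* entries.                                          */
WRKDGFEHLD   /* Work with file entries held because of database   */
             /* apply errors; correct the cause, then release.    */
```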
Missed transactions for IFS objects, data areas, and data queues that are replicated through the user journal will not be detected by journal analysis.

Support for switching a data group

Typically, the Switch Data Group (SWTDG) command is called programmatically to change the direction in which replication occurs between the systems defined to a data group. The SWTDG command supports both planned and unplanned switches.

When you perform a planned switch, you are purposely changing the direction of replication for any of a variety of reasons. You may need to take the system offline to perform maintenance on its hardware or software, or you may be testing your disaster recovery plan. In a planned switch, the production system (the source of replication) is available, and data group processing is ended on both the source and target systems. You perform a planned switch using the MIMIX Switch Assistant or by using commands to call a customized implementation of MIMIX Model Switch Framework. The next time you start the data group, it will be set to replicate in the opposite direction.

When you perform an unplanned switch, you are changing the direction of replication as a response to a problem; most likely the production system is no longer available. In an unplanned switch, you must run the SWTDG command from the target system. Data group processing is ended on the target system. The next time you start the data group, it will be set to replicate in the opposite direction.

To enable a switchable data group to function properly for default user journal replication processes, four journal definitions (two RJ links) are required. "Journal definition considerations" on page 184 contains examples of how to set up these journal definitions.

Once you have a properly configured data group that supports switching, you should be aware of how MIMIX supports unconfirmed entries and the state of the RJ link following a switch. You can specify whether to end the RJ link during a switch. Default behavior for a planned switch is to leave the RJ link running; default behavior during an unplanned switch is to end the RJ link. For more information, see "Support for unconfirmed entries during a switch" on page 67 and "RJ link considerations when switching" on page 67.

For additional information about switching, see the Using MIMIX book. For additional information about MIMIX Model Switch Framework, see the Using MIMIX Monitor book.
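A sketch only: the section states that SWTDG is normally called programmatically and that, for an unplanned switch, it must be run from the target system. The parameter shown below (DGDFN with a three-part data group name) is a hypothetical illustration based on the multi-part naming convention, not confirmed SWTDG syntax; prompt the command (F4) to see its actual parameters.

```
/* Hypothetical: switch the direction of replication for one      */
/* data group. For an unplanned switch, run this from the target  */
/* system.                                                        */
SWTDG DGDFN(APPDG SYSTEM1 SYSTEM2)
```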
Support for working with messages

MIMIX sends a variety of system messages based on the status of MIMIX jobs and processes. When messages are issued, they are initially sent to both the primary and secondary message queues that are specified for the system definition. In addition to these message queues, message entries are recorded in a MIMIX message log file. Maintaining a message log file allows you to keep a record of messages issued by MIMIX as an audit trail. In the event that these message queues are erased, placing messages into the message log file secures a second level of information concerning MIMIX operations.

The MIMIX message log provides a powerful tool for problem determination and debugging, including the ability to locate and display related job logs. In addition, the message log provides robust subset and filter capabilities. You can view messages generated by MIMIX from either the Message Log window or from the Work with Message Log (WRKMSGLOG) display.

The system manager is responsible for collecting messages from all network systems. The message log on the management system contains messages from the management system and each network system defined within the installation. On a network system, the message log contains only those messages generated by MIMIX activity on that system.

MIMIX automatically performs cleanup of the message log on a regular basis. The system manager deletes entries from the message log file based on the value of the Keep system history parameter in the system definition. However, if you process an unusually high volume of replicated data, you may want to also periodically delete unnecessary message log entries, since the file grows in size depending on the number of messages issued in a day.
CHAPTER 2 Replication process overview

In general terms, a replication path is a series of processes that, together, represent the critical path on which data to be replicated moves from its origin to its destination. MIMIX uses two replication paths to accommodate differences in how replication occurs for databases and objects. These paths operate with configurable levels of cooperation or can operate independently. This chapter describes the replication paths and the processes used within each. The topics in this chapter include:
• "Replication job and supporting job names" on page 47 describes the replication paths for database and object information. Included is a table which identifies the replication job names for each of the processes that make up the replication path.
• "Cooperative processing introduction" on page 49 describes three variations available for performing replication activities using a coordinated effort between user journal processing and system journal processing. Configuration choices determine the degree of cooperative processing used between the system journal and user journal replication paths when replicating files.
• "System journal replication" on page 51 describes the system journal replication path, which is designed to handle the object-related availability needs of your system through system journal processing. The system journal replication path handles replication of critical system objects (such as user profiles or spooled files), integrated file system (IFS) objects, and document library objects (DLOs) using the IBM i system journal. In previous versions, MIMIX Object Replicator provided this function.
• "User journal replication" on page 58 describes remote journaling and the benefits of using remote journaling with MIMIX. The user journal replication path captures changes to critical files and objects configured for replication through the user journal using the IBM i remote journaling function. In previous versions, MIMIX DB2 Replicator provided this function.
• "User journal replication of IFS objects, data areas, data queues" on page 69 describes a technique which allows replication of changed data for certain object types through the user journal.
• "Lesser-used processes for user journal replication" on page 73 describes two lesser-used replication processes: MIMIX source-send processing for database replication and the data area poller process.
Replication job and supporting job names

The replication path for database information includes the IBM i remote journal function, the MIMIX database reader process, and one or more database apply processes. If MIMIX source-send processes are used instead of remote journaling, the processes include the database send process, the database receive process, and one or more database apply processes.

The replication path for object information includes the object send process, the object receive process, and the object apply process. A self-contained request is an operation that deletes, moves, or renames an object, or that changes the authority or ownership of an object. A data retrieval request is an operation that creates or changes the content of an object. When a data retrieval request is replicated, the replication path also includes the object retrieve, container send, and container receive processes.

Table 3 identifies the job names for each of the processes that make up the replication path. Except as noted, MIMIX automatically restarts the jobs in Table 3 to maintain the MIMIX environment. The default is to restart these MIMIX jobs daily at midnight (12:00 a.m.). If this time conflicts with scheduled workloads, you can configure a different time to restart the jobs. For more information, see “Configuring restart times for MIMIX jobs” on page 285.

Table 3. MIMIX processes and their corresponding job names

Description                      Abbreviation  Runs on           Job name      Notes
Container receive process        CNRRCV        Target            sdn_CNRRCV    1, 3
Container send process           CNRSND        Source            sdn_CNRSND    1, 3
Data area polling                DAPOLL        Source            sdn_DAPOLL    3
Database apply process           DBAPY         Target            sdn_DBAPYs    3, 4
Database receive process         DBRCV         Target            sdn_DBRCV     1, 3
Database reader                  DBRDR         Target            sdn_DBRDR     3
Database send process            DBSND         Source            sdn_DBSND     1, 3
Journal manager                  JRNMGR        System            JRNMGR
MIMIX Communications Daemon      MXCOMMD       System            MXCOMMD
Object selection process         MXOBJSELPR    System            MXOBJSELPR
Object apply process             OBJAPY        Target            sdn_OBJAPY    3
Object retrieve process          OBJRTV        Source            sdn_OBJRTV    3
Object send process              OBJSND        Source            sdn_OBJSND    1, 3
Object receive process           OBJRCV        Target            sdn_OBJRCV    1, 3
Status send                      STSSND        Target            sdn_STSSND    1, 3
System manager                   SYSMGR        System            SM********    1, 2
System manager receive process   SYSMGRRCV     Network           SR********    1, 2
Status receive                   STSRCV        Source            sdn_STSRCV    1, 3
Tracking entry update process    TEUPD         Source or Target  sdn_TEUPD     3, 5

Notes:
1. Send and receive processes depend on communication. The job name varies, depending on the transfer protocol. TCP/IP uses a port number or alias as the job name; the alias is defined on the service table entry. The SNA job name is derived from the remote location name. OptiConnect job names start with APIA* in the QSOC subsystem.
2. The ******** in the job name format indicates the name of the system definition. The system manager runs on both source and target systems.
3. The characters sdn in a job name indicate the short data group name.
4. The character s is the apply session letter.
5. The job is used only for replication with advanced journaling and is started only when needed.

Cooperative processing introduction

Cooperative processing is when the MIMIX user journal processes and system journal processes work in a coordinated effort to perform replication activities for certain object types. When configured, cooperative processing enables MIMIX to perform replication in the most efficient way by evaluating the object type and the MIMIX configuration to determine whether to use the system journal replication processes, the user journal replication processes, or a combination of both.
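The path-selection evaluation just described can be sketched roughly as follows. This is an illustrative sketch only, not MIMIX code: the function and parameter names are invented, and the rules paraphrase the *FILE dispatch described in this introduction.

```python
# Hypothetical sketch of how MIMIX chooses a replication path for a *FILE
# object, paraphrasing the rules in this introduction. Names are invented.

LOGICAL_AND_PHYSICAL = {"LF", "PF-SRC", "PF38-SRC", "PF-DTA", "PF38-DTA"}
DATA_FILES = {"PF-DTA", "PF38-DTA"}  # the only types legacy processing supports

def choose_replication_path(file_attr, meets_dynamic_apply, meets_legacy):
    """Return which path replicates a *FILE object of the given attribute."""
    # MIMIX Dynamic Apply handles logical and physical (source and data) files,
    # unless a known restriction prevents it (restrictions not modeled here).
    if meets_dynamic_apply and file_attr in LOGICAL_AND_PHYSICAL:
        return "MIMIX Dynamic Apply"
    # Legacy cooperative processing handles only data files.
    if meets_legacy and file_attr in DATA_FILES:
        return "legacy cooperative processing"
    # All other files fall back to the system journal path.
    return "system journal replication"
```

The sketch makes the precedence visible: Dynamic Apply is preferred when its requirements are met, legacy cooperative processing is the fallback for data files, and the system journal handles everything else.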
Object types that can be journaled to a user journal are eligible to be processed cooperatively when properly configured to MIMIX. MIMIX supports the following variations of cooperative processing for these object types:
• MIMIX Dynamic Apply (files)
• Legacy cooperative processing (files)
• Advanced journaling (IFS objects, data areas, and data queues)

In all variations of cooperative processing, the system journal is used to replicate the following operations:
• The creation of new objects that do not deposit an entry in a user journal when they are created
• Restores of objects on the source system
• Move and rename operations from a non-replicated library or path into a library or path that is configured for replication

Cooperative processing also provides a greater level of data protection, data management efficiency, and high availability by ensuring the complete replication of newly created or redefined files and objects.

When a data group definition meets the requirements for MIMIX Dynamic Apply, any logical files and physical (source and data) files properly identified for cooperative processing will be processed via MIMIX Dynamic Apply unless a known restriction prevents it. When a data group definition does not meet the requirements for MIMIX Dynamic Apply but still meets legacy cooperative processing requirements, any PF-DTA or PF38-DTA files properly configured for cooperative processing will be replicated using legacy cooperative processing. All other types of files are processed using system journal replication.

By default, IFS objects, data areas, and data queues that can be journaled are not automatically configured for advanced journaling. These object types must be manually configured to use advanced journaling.

MIMIX Dynamic Apply

Most environments can take advantage of cooperatively processed operations for *FILE objects that are journaled primarily through a user (database) journal. MIMIX Dynamic Apply is the most efficient way to perform cooperative processing of logical and physical files, and it is recommended for cooperative processing. When configured, MIMIX Dynamic Apply intelligently handles files with relationships by assigning them to the same or appropriate apply sessions. It is also much better at maintaining data integrity of replicated objects which previously needed legacy cooperative processing in order to replicate some operations such as creates, deletes, moves, and renames. Another benefit of MIMIX Dynamic Apply is more efficient hold log processing, enabling multiple files to be processed through a hold log instead of just one file at a time.

New data groups created with the shipped default configuration values are configured to use MIMIX Dynamic Apply. This configuration requires data group object entries and data group file entries. Existing data groups configured to use legacy cooperative processing can be converted to use MIMIX Dynamic Apply. For more information, see “Identifying logical and physical files for replication” on page 96 and “Requirements and limitations of MIMIX Dynamic Apply” on page 101.

Legacy cooperative processing

In legacy cooperative processing, record and member operations of *FILE objects are replicated through user journal processes, while all other transactions are replicated through system journal processes. Legacy cooperative processing supports only data files (PF-DTA and PF38-DTA). Data groups that existed prior to upgrading to MIMIX version 5 are typically configured with legacy cooperative processing, which requires data group object entries and data group file entries. For more information, see “Requirements and limitations of legacy cooperative processing” on page 102.

Advanced journaling

The term advanced journaling refers to journaled IFS objects, data areas, and data queues that are configured for cooperative processing. When these objects are configured for cooperative processing, replication of changed bytes of the journaled objects’ data occurs through the user journal. This is more efficient than replicating an entire object through the system journal each time changes occur, even for equal amounts of data. In addition, processing time for these object types may be reduced, as user journal replication eliminates the separate save, send, and restore processes necessary for system journal replication. Such a configuration also allows for the serialization of updates to IFS objects, data areas, and data queues with database journal entries.

Frequently you will see the phrase “user journal replication of IFS objects, data areas, and data queues” used interchangeably with the term advanced journaling. These terms are the same. For more information, see “User journal replication of IFS objects, data areas, data queues” on page 69 and “Planning for journaled IFS objects, data areas, and data queues” on page 79.

System journal replication

The system journal replication path is designed to handle the object-related availability needs of your system. The system journal replication path in MIMIX uses the following processes:
• Object send process: alternates between identifying objects to be replicated and transmitting control information about objects ready for replication to the target system.
• Object retrieve process: if any additional information is needed for replication, obtains it and places it in a holding area. This process is also used when additional processing is required on the source system prior to transmission to the target system.
• Object receive process: receives control information and waits for notification that additional source system processing, if any, is complete before passing the control information to the object apply process.
• Container send process: transmits any additional information from a holding area to the target system and notifies the control process of that action.
• Container receive process: receives any additional information and places it into a holding area on the target system.
• Object apply process: replicates objects according to the control information and any required additional information that is retrieved from the holding area.
• Status send process: notifies the source system of the status of the replication.
• Status receive process: updates the status on the source system and, if necessary, passes control information back to the object send process.

The processes are interdependent and run concurrently. When a data group is started, each process used in the replication path waits for its predetermined event to occur and then begins its activity.
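The first step in this pipeline, the object send process's identification of replicable activity, can be sketched as a filter over security audit journal entries. This is an illustrative sketch only: the data shapes and function names are invented, and the entry codes shown (T-CA, T-OM, T-DO) are the examples named later in this chapter, not a complete list.

```python
# Hypothetical model of the object send filtering step: read system journal
# (QAUDJRN) entries and keep those for objects identified for replication.

REPLICATED_CODES = {"T-CA", "T-OM", "T-DO"}  # authority change, move/rename, delete

def in_name_space(obj, patterns):
    """True if obj matches any configured data group entry (prefix patterns)."""
    return any(obj.startswith(p.rstrip("*")) if p.endswith("*") else obj == p
               for p in patterns)

def select_entries(entries, patterns):
    """Keep journal entries that represent operations on name-space objects."""
    return [e for e in entries
            if e["code"] in REPLICATED_CODES and in_name_space(e["object"], patterns)]
```

Entries that survive this filter become activity entries; everything else in the journal is ignored by the replication path.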
Replication through the system journal is event-driven. An object is replicated when it is created, restored, moved, or renamed into the MIMIX name space, or when activity occurs on the object, such as it being accessed or changed. While in the MIMIX name space, changes to the object or to the authority settings of the object are also replicated. If an operation on an object is not represented by an entry in the system journal, MIMIX is not aware of the operation and cannot replicate it.

You identify the critical system objects that you want to replicate, such as user profiles, programs, and DLOs. The system objects you want to replicate are defined to a data group through data group object entries, data group DLO entries, and data group IFS entries. The term name space refers to this collection of objects that are identified for replication by MIMIX using the system journal replication processes.

MIMIX uses the journal entries generated by the operating system’s object auditing function to identify the changes to objects on production systems and replicates the changes to backup systems. If you are not already using the system’s security audit journal (QAUDJRN, or system journal), MIMIX creates the journal and correctly sets system values related to auditing when you use MIMIX commands to build the journaling environment. MIMIX checks the settings of the following system values, making changes as necessary:
• QAUDCTL (Auditing control) system value. MIMIX sets the values *OBJAUD and *AUDLVL.
• QAUDLVL (Security auditing level) system value. MIMIX sets the values *CREATE, *DELETE, *OBJMGT, and *SAVRST. If any data group is configured to replicate spooled files, MIMIX also sets *SPLFDTA and *PRTDTA. MIMIX checks for values *SECURITY, *SECCFG, *SECRUN, and *SECVLDL and will set them only if the value *SECURITY is not already set.

These system value settings, along with the object audit value of each object, control what journal entries are created in the system journal (QAUDJRN) for an object.

MIMIX uses the security audit journal to monitor for activity on objects within the name space. When activity occurs on an object within the name space, a corresponding journal entry is created in the security audit journal. Examples of journal entries include Change Authority (T-CA), Object Move or Rename (T-OM), and Object Delete (T-DO).

MIMIX uses a collection of structures and customized functions for controlling these structures during replication. Collectively, the customized functions and structures are referred to as the work log. The structures in the work log consist of log spaces, work lists (implemented as user queues), and a distribution status file. User interaction with activity entries is through the Work with Data Group Activity display and the Work with DG Activity Entries display.

When a data group is started, the object send process reads journal entries and determines if they represent operations to objects that are within the name space. For each journal entry for an object within the name space, the object send process creates an activity entry in the work log. Creation of an activity entry includes adding the entry to the log space and adding a record to the distribution status file. An activity entry includes a copy of the journal entry and any related information associated with a replication operation for an object.

There are two categories of activity entries: those that are self-contained and those that require the retrieval of additional information. “Processing self-contained activity entries” on page 52 describes the simplest object replication scenario. “Processing data-retrieval activity entries” on page 53 describes the object replication scenario in which additional data must be retrieved from the source system and sent to the target system.

Processing self-contained activity entries

For a self-contained activity entry, the copied journal entry contains all of the information required to replicate the object. After the object send process determines that an entry is to be replicated, it performs the following actions:
• Sets the status of the entry to PA (pending apply)
• Adds the “sent” date and time to the activity entry
• Writes the activity entry to the log space and adds a record to the distribution status file
• Transmits the activity entry to a corresponding object receive process job on the target system
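The bookkeeping steps listed above can be modeled in a few lines. This is a minimal sketch under invented names, not MIMIX code: the log space and distribution status file are represented as plain Python lists.

```python
# Illustrative model of the object send actions for a self-contained activity
# entry; all structure and field names are invented for the sketch.
from datetime import datetime, timezone

def send_self_contained(entry, log_space, dist_status):
    """Mark the entry pending apply, stamp it, and record it before transmission."""
    entry["status"] = "PA"                      # pending apply
    entry["sent"] = datetime.now(timezone.utc)  # the "sent" date and time
    log_space.append(dict(entry))               # write the entry to the log space
    dist_status.append(entry["id"])             # add a record to the distribution status file
    return entry                                # would then go to the object receive process
```

Because both the log space write and the distribution status record happen before transmission, each system ends up holding its own copy of the activity entry, which is what allows status to be tracked independently on source and target.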
The object receive process adds the “received” date and time to the activity entry, writes the activity entry to the log space, adds a record to the distribution status file, and places the activity entry on the object apply work list. Now each system has a copy of the activity entry.

The next available object apply process job for the data group retrieves the activity entry from the object apply work list and replicates the operation represented by the entry. The object apply process adds the “applied” date and time to the activity entry, changes the status of the entry to CP (completed processing), and adds the entry to the status send work list.

The status send process retrieves the activity entry from the status send work list and transmits the updated entry to a corresponding status receive process on the source system. The status receive process updates the activity entry in the work log and the distribution status file.

Processing data-retrieval activity entries

In a data retrieval activity entry, the copied journal entry indicates that changes to an object affect the attributes or data of the object, or both. The actual content of the change is not recorded in the journal entry. To properly replicate the object, additional data, such as the object’s content or attributes, must be retrieved and transmitted to the target system.

MIMIX may retrieve this data by using APIs or by using the appropriate save command for the object type. APIs store the data in one or more user spaces (*USRSPC) in a data library associated with the MIMIX installation. Save commands store the object data in a save file (*SAVF) in the data library. Collectively, these objects in the data library are known as containers.

After the object send process determines that an entry is to be replicated and that additional processing or information on the source system is required, it performs the following actions:
• Sets the status of the entry to PR (pending retrieve)
• Adds the “sent” date and time to the activity entry
• Writes the activity entry to the log space and adds a record to the distribution status file
• Transmits the activity entry to a corresponding object receive process on the target system
• Adds the entry to the object retrieve work list on the source system

Now each system has a copy of the activity entry. The object receive process adds the “received” date and time to the activity entry, writes the activity entry to the log space, and adds a record to the distribution status file. The object receive process waits until the source system processing is complete before it adds the activity entry to the object apply work list.

On the source system, the next available object retrieve process for the data group retrieves the activity entry from the object retrieve work list and processes the referenced object. The object retrieve process may perform some or all of the following steps:
• Retrieve the extended attribute of the object. This may be one step in retrieving the object or it may be the primary function required of the retrieve process.
• Package the object identified by the activity entry into a container in the data library.
• Perform additional processing required on the source system. In addition to retrieving information for the activity entry, operations such as cooperative processing activities or adding or removing a data group file entry may be performed.

The object retrieve process adds the “retrieved” date and time to the activity entry and changes the status of the entry to “pending send.” The activity entry is added to the object send work list.

When the object send process finds an activity entry in the object send work list, it takes the appropriate action for the activity, which may be to send the entry to the target system. The object send process performs one or more of the following additional steps on the entry:
• If an object retrieve job packaged the object, the entry is added to the container send work list. The next available job for the container send process for the data group retrieves the activity entry from the container send work list, locates the container for the packaged object in the data library, and transmits the container to a corresponding job of the container receive process on the target system. The container send process waits for confirmation from the container receive job, then adds the “container sent” date and time to the activity entry. The container receive process places the container in a data library on the target system.
• The activity entry is transmitted to the target system, and the status of the activity entry is changed to PA (pending apply).

The container send and receive processes are only used when an activity entry requires information in addition to what is contained within the journal entry, such as when an updated container is needed on the target system.

The next available object apply process job for the data group retrieves the activity entry from the object apply work list and replicates the operation represented by the entry. The object apply process adds the “applied” date and time to the activity entry, changes the status of the entry to CP (completed processing), and adds the entry to the status send work list. The status send process retrieves the activity entry from the status send work list and transmits the updated entry to a corresponding job for the status receive process on the source system. The status receive process updates the activity entry in the log space and the distribution status file. If the activity entry requires further processing, its status is updated and the status receive job adds the entry to the object send work list. From there the object send job takes the appropriate action for the activity.
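The hand-offs between work lists described above can be sketched as simple FIFO queues. This is a hypothetical sketch, not MIMIX code: the queues and field names are invented, while the statuses PR, “pending send,” and PA come from the text.

```python
# Illustrative model of the retrieve-then-route hand-off between work lists,
# using FIFO queues in place of the user-queue work lists described above.
from collections import deque

object_send_wl = deque()
container_send_wl = deque()

def finish_retrieve(entry, packaged):
    """Object retrieve done: stamp the entry and return it to the object send list."""
    entry["status"] = "pending send"
    entry["packaged"] = packaged       # True if a container was built in the data library
    object_send_wl.append(entry)

def object_send_step():
    """Object send: route one entry; packaged objects also queue a container send."""
    entry = object_send_wl.popleft()
    if entry["packaged"]:
        container_send_wl.append(entry)  # container is transmitted separately
    entry["status"] = "PA"               # pending apply once sent to the target
    return entry
```

The point of the sketch is the split: the activity entry itself always flows to the target, while the container takes a parallel path only when the retrieve step actually packaged one.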
Tracking object replication

After you start a data group, you need to monitor the status of the replication processes and respond to any error conditions. Regular monitoring and timely responses to error conditions significantly reduce the amount of time and effort required in the event that you need to switch a data group.

MIMIX provides an indication of high level status of the processes used in object replication and of error conditions. You can access detailed status information through the Data Group Status window in MIMIX Availability Manager or the MIMIX Availability Status display in a 5250 emulator.

When an operation cannot complete on either the source or target system (such as when the object is in use by another process and cannot be accessed), the activity entry may go to a failed state. MIMIX attempts to rectify many failures automatically, but some failures require manual intervention. Objects with at least one failed entry outstanding are considered to be “in error.” You should periodically review the objects in error, and the associated failed entries, and determine the appropriate action. You may retry or delete one or all of the failed entries for an object. You can check the progress of activity entries and take corrective action through the Work with Data Group Activity display and the Work with DG Activity Entries display. You can also subset directly to the activity entries in error from the Work with Data Groups display.

If you have new objects to replicate that are not within the MIMIX name space, you need to add data group entries for them. Before any new data group entries can be replicated, you must end and restart the system journal replication processes in order for the changes to take effect.

The Keep data group history (days) parameter (KEEPDGHST) indicates how long the activity entries remain on the system. The system manager removes old activity entries from the work log on each system after the time specified in the system definition passes. You can also manually delete activity entries. Containers in the data libraries are deleted after the time specified in the Keep MIMIX data (days) parameter (KEEPMMXDTA).

Processes with multiple jobs

The object retrieve, container send and receive, and object apply processes all consist of one or more asynchronous jobs. You can specify the minimum and maximum number of asynchronous jobs you want to allow MIMIX to run for each process, and a threshold for activating additional jobs. The minimum number indicates how many permanent jobs should be started for the process. These jobs stay active as long as the data group is active. During periods of peak activity, if more requests are backlogged than are specified in the threshold, additional temporary jobs, up to the maximum number, may also be started. When system activity returns to a reduced level, the temporary jobs end after a period of inactivity elapses. This load leveling feature allows system journal replication processes to react automatically to periodic heavy workloads. By doing this, the replication process stays current with production system activity.
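The load-leveling rule can be sketched as below. Note the assumption: the document does not give the exact scaling formula, so the sketch assumes one temporary job per full threshold of backlogged requests; the real MIMIX heuristic may differ.

```python
# A sketch of load leveling for processes with multiple jobs. ASSUMPTION: one
# temporary job is added per full threshold of backlog; MIMIX's actual rule
# is not documented here.

def jobs_to_run(backlog, min_jobs, max_jobs, threshold):
    """Permanent jobs plus temporary jobs for backlog, never exceeding max_jobs."""
    extra = backlog // threshold if threshold > 0 else 0
    return min(max_jobs, min_jobs + extra)
```

Whatever the exact heuristic, the shape is the one the text describes: the minimum number of jobs always runs, temporary jobs appear only when the backlog exceeds the threshold, and the maximum caps the total.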
Managing object auditing

The system journal replication path within MIMIX relies on entries placed in the system journal by IBM i object auditing functions. To ensure that objects configured for this replication path retain an object auditing value that supports replication, MIMIX employs a configuration value that is specified on the Object auditing value (OBJAUD) parameter of data group entries (object, IFS, or DLO). The OBJAUD parameter supports object audit values of *ALL, *CHANGE, and *NONE.

MIMIX evaluates and may change an object’s auditing value when specific conditions exist during object replication or during processing of a Start Data Group (STRDG) request. This evaluation process can also be invoked manually for all objects identified for replication by a data group. When MIMIX determines that an object’s auditing value is lower than the configured value, it changes the object to have the higher configured value specified in the data group entry that is the closest match to the object.

During replication - MIMIX may change the auditing value during replication when an object is replicated because it was created, restored, moved, or renamed into the MIMIX name space (the group of objects defined to MIMIX).

While starting a data group - MIMIX may change the auditing value while processing a STRDG request if the request specified processes that cause object send (OBJSND) jobs to start and the request occurred after a data group switch or after a configuration change to one or more data group entries (object, IFS, or DLO) configured for the system journal replication path. Shipped command defaults for the STRDG command allow MIMIX to set object auditing if necessary. If you would rather set the auditing level for replicated objects yourself, you can specify *NO for the Set object auditing level (SETAUD) parameter when you start data groups.

Invoking manually - The Set Data Group Auditing (SETDGAUD) command provides the ability to manually set the object auditing level of existing objects identified for replication by a data group. When the command is invoked, MIMIX checks the audit value of existing objects identified for system journal replication. Shipped default values on the command cause MIMIX to change the object auditing value of objects to match the configured value when an object’s actual value is lower than the configured value. The SETDGAUD command also supports optionally forcing a change to a configured value that is lower than the existing value through its Force audit value (FORCE) parameter. The SETDGAUD command is used during initial configuration of a data group. Otherwise, it is not necessary for normal operations and should only be used under the direction of a trained MIMIX support representative.

Evaluation processing - Regardless of how the object auditing evaluation is invoked, MIMIX evaluates and changes objects’ auditing values when necessary. MIMIX may find that an object is identified by more than one data group entry within the same class of object (IFS, DLO, or library-based), so it is important to understand the order of precedence for processing data group entries. Data group entries are processed in order from most generic to most specific. IFS entries are processed using the unicode character set; object entries and DLO entries are processed using the EBCDIC character set. The entry that most specifically matches the object is used to process the object: the first (more generic) entry found that matches the object is used until a more specific match is found. When an object’s auditing value is changed, it is set to the configured auditing value specified in the data group entry that most specifically matches the object.

When MIMIX processes a data group IFS entry and changes the auditing level of objects which match the entry, all descendents of the IFS object may also have their auditing value changed. In the case of an IFS entry with a generic name, all of the directories in the object’s directory path are checked and, if necessary, changed to the new auditing value.

When you change a data group entry, MIMIX updates all objects identified by the same type of data group entry in order to ensure that auditing is set properly for objects identified by multiple entries with different configured auditing values. For example, if a new DLO entry is added to a data group, MIMIX sets object auditing for all objects identified by the data group’s DLO entries, but not for its object entries or IFS entries. For more information and examples of setting auditing values with the SETDGAUD command, see “Setting data group auditing values manually” on page 270.
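The precedence and raise-only rules above can be sketched compactly. This is an illustrative sketch, not MIMIX code: the function names and the prefix-pattern representation are invented, while the value ordering *NONE < *CHANGE < *ALL and the forced-lowering behavior follow the OBJAUD and FORCE discussion.

```python
# Hypothetical model of choosing the most specific matching data group entry
# and deciding the new audit value. Names and data shapes are invented.

AUDIT_RANK = {"*NONE": 0, "*CHANGE": 1, "*ALL": 2}

def configured_audit(obj, entries):
    """entries: mapping of name pattern ('*' suffix = generic) -> OBJAUD value."""
    best = None  # (specificity, value); higher specificity wins
    for pattern, value in entries.items():
        if pattern.endswith("*") and obj.startswith(pattern[:-1]):
            match = (len(pattern), value)
        elif obj == pattern:
            match = (len(pattern) + 1, value)  # exact match outranks generic
        else:
            continue
        if best is None or match[0] > best[0]:
            best = match
    return best[1] if best else None

def new_audit_value(current, configured, force=False):
    """Raise the audit value to the configured value; lower it only when forced."""
    if configured is None:
        return current
    if force or AUDIT_RANK[configured] > AUDIT_RANK[current]:
        return configured
    return current
```

A usage example: with entries {"APP*": "*CHANGE", "APPX": "*ALL"}, object APPX takes *ALL from the exact entry, while APPY falls back to the generic entry's *CHANGE, mirroring the generic-to-specific processing order described above.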
User journal replication

Newly created data groups use remote journaling as the default configuration. MIMIX Remote Journal support enables MIMIX to take advantage of the cross-journal communications functions provided by the IBM i remote journal function instead of using the internal communications provided by MIMIX.

What is remote journaling?

Remote journaling is a function of the IBM i that allows you to establish journals and journal receivers on a target system and associate them with specific journals and journal receivers on a source system. After the journals and journal receivers are established on both systems, the remote journal function can replicate journal entries from the source system to the journals and journal receivers located on the target system.

You should become familiar with the terminology used by the IBM i remote journal function. The IBM redbooks AS/400 Remote Journal Function for High Availability and Data Replication (SG24-5189) and Striving for Optimal Journal Performance on DB2 Universal Database for iSeries (SG24-6286) provide an excellent overview of remote journaling in a high availability environment. The Backup and Recovery and Journal management books are good sources for terminology and for information about considerations you should be aware of when you use remote journaling. You can find these books online at the IBM eServer iSeries Information Center.

Benefits of using remote journaling with MIMIX

MIMIX has internal send and receive processing as part of its architecture; MIMIX Remote Journal support allows the IBM i remote journal function to be used in its place. The remote journal function supports both synchronous and asynchronous modes of operation. More information about the benefits and implications of each mode can be found in topic “Overview of IBM processing of remote journals” on page 60.

As stated in the AS/400 Remote Journal Function for High Availability and Data Replication redbook, the benefits of the remote journal function include:
• It lowers the CPU consumption on the source machine by shifting the processing required to receive the journal entries from the source system to the target system.
• It eliminates the need to buffer journal entries to a temporary area before transmitting them from the source machine to the target machine. This translates into fewer disk writes and greater DASD efficiency on the source system.
• Since it is implemented in microcode, it significantly improves the replication performance of journal entries and allows database images to be sent to the target system in realtime. This is true when asynchronous delivery is selected. If the synchronous delivery mode is used, the journal entries are guaranteed to be in main storage on the target system prior to control being returned to the application on the source machine.
• It allows the journal receiver save and restore operations to be moved to the target system. This way, the resource utilization on the source machine can be reduced.

Restrictions of MIMIX Remote Journal support

The IBM i remote journal function does not allow writing journal entries directly to the target journal receiver. MIMIX user journal replication does not support a cascading environment in which remote journal receivers on the target system are also source journal receivers for a third system. This restriction severely limits the usefulness of cascading remote journals in a managed availability environment. Users who require this type of environment may use multiple installations of MIMIX, implementing apply side journaling in one installation and using remote journaling to replicate the applied transactions to a third system.
Overview of IBM processing of remote journals

Several key concepts within the IBM i remote journal function are important to understanding its impact on MIMIX replication.

A local-remote journal pair refers to the relationship between a configured source journal and target journal. The key point about a local-remote journal pair is that data flows only in one direction within the pair, from source to target.

When the remote journal function is activated and all journal entries from the source are requested, existing journal entries for the specified journal receiver on the source system which have not already been replicated are replicated as quickly as possible. This is known as catchup mode. Once the existing journal entries are delivered to the target system, the system begins sending new entries in continuous mode according to the delivery mode specified when the remote journal function was started. New journal entries can be delivered either synchronously or asynchronously.

Synchronous delivery

In synchronous delivery mode the target system is updated in real time with journal entries as they are generated by the source applications. The source applications do not continue processing until the journal entries are sent to the target journal.

Each journal entry is first replicated to the target journal receiver in main memory on the target system (1 in Figure 3). When the source system receives notification of the delivery to the target journal receiver, the journal entry is placed in the source journal receiver (2) and the source database is updated (3). With synchronous delivery, journal entries that have been written to memory on the target system are considered unconfirmed entries until they have been written to auxiliary storage on the source system and confirmation of this is received on the target system (4).

Figure 3. Synchronous mode sequence of activity in the IBM remote journal feature. (The figure shows the applications, source journal message queue, source journal receiver (local), and production database on the source system, and the target journal message queue and target journal receiver (remote) on the target system.)

Unconfirmed journal entries are entries replicated to a target system, but the state of the I/O to auxiliary storage for the same journal entries on the source system is not known. Unconfirmed entries only pertain to remote journals that are maintained synchronously. They are held in the data portion of the target journal receiver. These entries are not processed with other journal entries unless specifically requested or until confirmation of the I/O for the same entries is received from the source system. Confirmation typically is not immediately sent to the target system for performance reasons.

Confirmed journal entries are entries that have been replicated to the target system and the I/O to auxiliary storage for the same journal entries on the source system is known to have completed. Once the confirmation is received, the entries are considered confirmed journal entries.

With synchronous delivery, the most recent copy of the data is on the target system. If the source system becomes unavailable, you can recover using data from the target system. Since delivery is synchronous to the application layer, there are application performance and communications bandwidth considerations. There is some performance impact to the application when it is moved from asynchronous mode to synchronous mode for high availability purposes. This impact can be minimized by ensuring efficient data movement. In general, a minimum of a dedicated 100 megabit ethernet connection is recommended for synchronous remote journaling.
Asynchronous delivery

In asynchronous delivery mode, the journal entries are placed in the source journal first (A in Figure 4) and then applied to the source database (B). An independent job sends the journal entries from a buffer (C) to the target system journal receiver (D) at some time after control is returned to the source applications that generated the journal entries. This delivery mode is most similar to the MIMIX database send and receive processes. Performance critical applications frequently use asynchronous delivery, and default values used in configuring MIMIX for remote journaling use asynchronous delivery.

Figure 4. Asynchronous mode sequence of activity in the IBM remote journal feature. (The figure shows the applications, source journal message queue, buffer, source journal receiver (local), and production database on the source system, and the target journal message queue and target journal receiver (remote) on the target system.)

With asynchronous delivery, the most recent copy of the data is on the source system. Because the journal entries on the target system may lag behind the source system's database, in the event of a source system failure, entries may become trapped on the source system. MIMIX includes special switch processing for unconfirmed entries to ensure that the most recent transactions are preserved in the event of a source system failure. For more information, see "Support for unconfirmed entries during a switch" on page 67.

User journal replication processes

Data groups created using default values are configured to use remote journaling support for user journal replication. The replication path for database information includes the IBM i remote journal function, the MIMIX database reader process, and one or more database apply processes.

The IBM i remote journal function transfers journal entries to the target system. Whether the IBM i remote journal function sends journal entries asynchronously or synchronously, remote journaling does not allow entries to be filtered from being sent to the remote system. All entries deposited into the source journal will be transmitted to the target system.

The database reader (DBRDR) process reads journal entries from the target journal receiver of a remote journal configuration and places those journal entries that match replication criteria for the data group into a log space. The database reader process performs the filtering that is identified in the data group definition parameters and file and tracking entry options.

The database apply process applies the changes stored in the target log space to the target system's database. Transactions are applied in real-time to generate a duplicate image of the journaled objects being replicated from the source system. MIMIX uses multiple apply processes in parallel for maximum efficiency.

The RJ link

To simplify tasks associated with remote journaling, MIMIX implements the concept of a remote journal link. A remote journal link (RJ link) is a configuration element that identifies an IBM i remote journaling environment. An RJ link identifies:

• A "source" journal definition that identifies the system and journal which are the source of journal entries being replicated from the source system.
• A "target" journal definition that defines a remote journal.
• Primary and secondary transfer definitions for the communications path for use by MIMIX.

The concept of an RJ link is integrated into existing commands. Once an RJ link is defined and other configuration elements are properly set, user journal replication processes will use the IBM i remote journaling environment within the replication path. The Work with RJ Links display makes it easy to identify the state of the IBM i remote journaling environment defined by the RJ link.

Sharing RJ links among data groups

It is possible to configure multiple data groups to use the same RJ link. However, data groups should only share an RJ link if they are intended to be switched together or if they are non-switchable data groups. Otherwise, there is additional communications overhead from data groups replicating in opposite directions and the potential for journal entries for database operations to be routed back to their originating system.

RJ links within and independently of data groups

The RJ link is integrated into commands for starting and ending data group replication (STRDG and ENDDG). The STRDG and ENDDG commands automatically determine whether the data group uses remote journaling and select the appropriate replication path processes. You will primarily use the End Data Group (ENDDG) command to end replication processes and to optionally end the RJ link when necessary. See "Support for unconfirmed entries during a switch" on page 67 and "RJ link considerations when switching" on page 67 for more details.

Two MIMIX commands provide the ability to use an RJ link, as needed, without performing data replication. The Start Remote Journal Link (STRRJLNK) and the End Remote Journal Link (ENDRJLNK) commands provide this capability.

Differences between ENDDG and ENDRJLNK commands

You should be aware of differences between ending data group replication (ENDDG command) and ending only the remote journal link (ENDRJLNK command). The End Remote Journal Link (ENDRJLNK) command ends only the RJ link. Both commands include an end option (ENDOPT parameter) to specify whether to end immediately or in a controlled manner. These options on the ENDRJLNK command do not have the same meaning as on the ENDDG command. For ENDRJLNK, the ENDOPT parameter has the following values:

Table 4. End option values on the End Remote Journal Link (ENDRJLNK) command.

*CNTRLD   Any journal entries that are queued for transmission to the target journal will be transmitted before the IBM i remote journal function is ended. At any time, the remote journal function may have one or more journal entries prepared for transmission to the target journal. If an asynchronous delivery mode is used over a slow communications line, it may take a significant amount of time to transmit the queued entries before actually ending the target journal.

*IMMED    The target journal is deactivated immediately. Journal entries that are already queued for transmission are not sent before the target journal is deactivated. The next time the remote journal function is started, the journal entries that were queued but not sent are prepared again for transmission to the target journal.

The ENDRJLNK command's ENDOPT parameter is ignored and an immediate end is performed when either of the following conditions are true:

• When the remote journal function is running in synchronous mode (DELIVERY(*SYNC)).
• When the remote journal function is performing catch-up processing.

RJ link monitors

MIMIX Monitor commands can be used to see the status of your RJ link monitors. Both the source and target RJ link monitor processes appear on this display. Users can end the monitors by using the Work with Monitors (WRKMON) command and selecting the End option.
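The two end options described in Table 4 can be sketched as commands. This is a sketch only: the parameter that identifies which link to end is intentionally omitted (shown as "...") because it is not described in this topic, and ENDOPT is the only parameter shown.

```
/* Controlled end: queued journal entries are transmitted to the      */
/* target journal before the IBM i remote journal function is ended.  */
ENDRJLNK ... ENDOPT(*CNTRLD)

/* Immediate end: the target journal is deactivated at once; entries  */
/* still queued are prepared again the next time the link is started. */
ENDRJLNK ... ENDOPT(*IMMED)
```

Either form ends only the RJ link; use the ENDDG command when the replication processes themselves must also be ended.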
RJ link monitors - operation

The RJ link monitors are automatically started when the master monitor is started. The monitors are created if they do not already exist. If for some reason the monitors are not already started, they will be started when you start a remote journal link. Two RJ link monitors are created automatically, one on the source system and one on the target system. The source RJ link monitor is named after the source journal definition and the target RJ link monitor is named after the target journal definition.

The RJ link monitors are MIMIX message queue monitors. User journal replication processes monitor the journal message queues of the journals identified by the RJ link. They monitor messages put on the message queues associated with the source and target journals. The operating system issues messages to these journal message queues when a failure is detected in IBM i remote journal processing. Typically this occurs when there are communications problems.

These monitors provide added value by allowing MIMIX to automatically monitor the state of the remote journal link, to notify the user of problems, and to automatically recover the link when possible. Each RJ link monitor uses information provided in the messages to determine which remote journal link is affected and to try to automatically recover that remote journal link. (The state of a remote journal link can be seen by using the Work with RJ Links (WRKRJLNK) command.) There is a limit on the number of times that a link will be recovered in a particular time period; a continually failing link will eventually be marked failed and recovery will end. The RJ link monitor for the source does not end once it is started. Once the problem is resolved, you can start the RJ link monitors again using the Work with Monitors (WRKMON) command and selecting the Start option.

You can also view the status of your RJ link monitors on the DSPDGSTS status display (option 8 from the Work with Data Groups display). The display shows whether or not the monitor processes are active. Alternately, the WRKMON command lists all monitors for a MIMIX installation and displays whether the monitor is active or inactive. If MIMIX Monitor is not installed as recommended, the RJ link monitor status appears as unknown on the Display Data Group Status display.

RJ link monitors in complex configurations

In a broadcast scenario, a single source journal definition can link to multiple target journal definitions. One source RJ link monitor handles this broadcast, since there is one source RJ monitor per source journal definition and more than one remote journal link can use a source monitor.

Alternately, in a cascade scenario an intermediate system can have both a source RJ link monitor and a target RJ link monitor running on it for the same journal definition. This intermediate system has the target journal definition for the system that originated the replication and holds the source journal definition for the next system in the cascade.

For more information about configuring for these environments, see "Data distribution and data management scenarios" on page 327.

Support for unconfirmed entries during a switch

The MIMIX Remote Journal support implements synchronous mode processing in a way that reduces data latency in the movement of journal entries from the source to the target system. Whenever an RJ link failure is detected, MIMIX saves any unconfirmed entries on the target system so they can be applied to the backup database if an unplanned switch is required. The unconfirmed entries are the most recent changes to the data. Maintaining this data on the target system is critical to your managed availability solution. This reduces the potential for and the degree of manual intervention when an unplanned outage occurs.

In the event of an unplanned switch, the unconfirmed entries are routed to the MIMIX database apply process to be applied to the backup database. As part of the unplanned switch processing, MIMIX checks whether the apply jobs are caught up. If the apply process is ended by a user before the switch, MIMIX will restart the apply jobs to preserve these entries. Then, to ensure full data integrity, the unconfirmed entries are processed before any new journal entries generated by the application are processed. As a result, you will see the database apply process jobs run longer than they would under standard switch processing.

When the backup system is brought online as the temporary source system, unconfirmed entries are applied to the target database and added to a journal that will be transferred to the source system when that system is brought back up. Furthermore, once the original source system is operational these unconfirmed entries are the first entries replicated back to that system. MIMIX applies the entries to the original production database. If journaling is still active on the original production database, new journal entries are created for the entries that were just applied. These new journal entries are essentially a repeat of the same operation just performed against the database, and remote journaling causes the entries to be transmitted back to the backup system. MIMIX prevents these repeat entries from being reapplied; however, these repeated entries cause additional resources to be used within MIMIX and in communications.

RJ link considerations when switching

By default, when a data group is ended or a planned switch occurs, the RJ link remains active; the default values used during a planned switch cause the RJ link to remain active. You need to consider whether to keep the original RJ link active after a planned switch of a data group. You may need to end the RJ link after a planned switch.

When you are temporarily running production applications on the backup system after a planned switch, journal entries generated on the backup system are transmitted to the remote journal receiver (which is on the production system). If the RJ link is used by another application or data group, the RJ link must remain active. If the RJ link is not used by any other application or data group, the link should be ended to prevent communications and processing overhead.

MIMIX Model Switch Framework considerations

When remote journaling is used in an environment in which MIMIX Model Switch Framework is implemented, you need to consider the implications of sharing an RJ link. Sharing an RJ link among multiple data groups is only recommended for the conditions identified in "Sharing RJ links among data groups" on page 63.
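The displays and commands named in this topic can be combined into a quick status check when a link problem is suspected. The sequence below is a sketch of one possible approach, not a required procedure, and no command parameters are shown.

```
WRKRJLNK    /* View the state of each remote journal link.            */
WRKMON      /* List the RJ link monitors; shows active or inactive.   */
            /* After a communications problem is resolved, use the    */
            /* Start option here to start a monitor again.            */
            /* Option 8 from the Work with Data Groups display shows  */
            /* the same monitor status on the DSPDGSTS display.       */
```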
User journal replication of IFS objects, data areas, data queues

IBM provides journaling support for IFS objects as well as for data areas and data queues. This capability allows transactions to be journaled in the user journal (database journal), much like transactions are recorded for database record changes. MIMIX enables you to take advantage of this capability of the IBM i when replicating these journaled objects. This support within MIMIX is often referred to as advanced journaling and is enabled by explicitly configuring data group object entries for data areas and data queues and data group IFS entries for IFS objects. In addition to data group object entries and IFS entries, MIMIX uses tracking entries to uniquely identify each object that is configured for advanced journaling.

A data group that replicates some or all configured IFS objects, data areas, or data queues through a user journal may also replicate files from the same journal as well as replicate objects from the system journal. For example, a data group could be configured to support MIMIX Dynamic Apply for *FILE objects, advanced journaling for IFS objects and data areas, and system journal processes for data queues and other library-based objects. For more information, see "Replication choices by object type" on page 88.

You may need to consider how much data is replicated through the same apply session for user journal replication processes and whether any transactions need to be serialized with database files. For more information, see "Planning for journaled IFS objects, data areas, and data queues" on page 79.

Benefits of advanced journaling

One of the most significant benefits of using advanced journaling is that IFS objects, data areas, and data queues are processed by replicating only changed bytes. When these objects are configured to allow user journal replication, each time an IFS object, data area, or data queue changes, only changed bytes are recorded in the journal entry, and MIMIX replicates only the changed bytes of the data. In contrast, when IFS objects, data areas, or data queues are replicated through the system journal, the entire object is shipped across the communications link. While this may be sufficient for many applications, those using large files or making frequent small byte-level changes can be negatively impacted by the additional data transmission.

Another significant benefit of using advanced journaling for IFS objects, data areas, and data queues is that transactions can be applied in lock-step with a database file. This requires that the objects and database are configured to the same data group and the same database apply session.

For example, assume that a hotel uses a database application to reserve rooms. Within the application, a data area contains a counter to indicate the number of rooms reserved for a particular day and a database file contains detailed information about reservations. Each time a room is reserved, both the counter and the database file are updated. If these updates do not occur in the same order on the target system, the hotel risks reserving too many or too few rooms. Without advanced journaling, serialization of these transactions cannot be guaranteed on the target system due to inherent differences in MIMIX processing from the user journal (database file) and the system journal (default for objects). With advanced journaling, MIMIX serializes these transactions on the target system by updating both the file and the data area through user journal processing, as long as the database file and data area are configured to be processed by the same apply session. Thus, updates occur on the target system in the same order they were originally made on the source system.

Additional benefits of replicating IFS objects, data areas, and data queues from the user journal include:

• Replication is less intrusive. In traditional object replication, system journal replication processes must contend with potential locks placed on the objects by user applications, and the save/restore process places locks on the replicated object on the source system. Database replication touches the user journal only, leaving the source object alone.
• Database replication eliminates the separate save, send, and restore processes necessary for object replication.
• Processing time may be reduced. Changes to objects replicated from the user journal may be replicated to the target system in a more timely manner, even for equal amounts of data.
• The objects replicated from the user journal can reduce burden on object replication processes when there is a lot of activity being replicated through the system journal.
• Commitment control is supported for B journal entry types for IFS objects journaled to a user journal.

Replication processes used by advanced journaling

When IFS objects, data areas, and data queues are properly configured, replication occurs through the user journal replication path. Processing occurs through the IBM i remote journal function, the MIMIX database reader process (1), and one database apply process (session A). Advanced journaling can be used in configurations that use either remote journaling or MIMIX source-send processes for user journal replication.

1. Data groups can also be configured for MIMIX source-send processing instead of MIMIX RJ support.

If one or more of the configuration requirements are not met, the system journal replication path is used. Restrictions and configuration requirements vary for IFS objects and data area or data queue objects. For detailed information, including supported journal entry types, see "Identifying data areas and data queues for replication" on page 103 and "Identifying IFS objects for replication" on page 106.
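The hotel example can be summarized as a sketch. This is pseudocode-style commentary, not actual MIMIX or application syntax, and the file and data area names (RSVDTL, RSVCNT) are hypothetical.

```
/* Source system, one logical reservation:                           */
/*   (1) write a record to the RSVDTL file  -- journaled database    */
/*   (2) CHGDTAARA the RSVCNT counter       -- journaled data area   */
/*                                                                   */
/* With advanced journaling, (1) and (2) flow through the same user  */
/* journal and are replayed by the same database apply session, so   */
/* the target always sees the order (1), (2).                        */
/*                                                                   */
/* Without advanced journaling, (1) is applied from the user journal */
/* while (2) is applied from the system journal, so the target may   */
/* see (2) before (1) and the counter can disagree with the file.    */
```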
Tracking entries

A unique tracking entry is associated with each IFS object, data area, and data queue that is replicated using advanced journaling. MIMIX requires a tracking entry for each of the eligible objects to identify how it is defined for replication and to assist with tracking status when it is replicated. IFS tracking entries identify IFS stream files, while object tracking entries identify data areas or data queues.

The collection of data group IFS entries for a data group determines the subset of existing IFS objects on the source system that are eligible for replication using advanced journaling techniques. Similarly, the collection of data group object entries determines the subset of existing data areas and data queues on the source system that are eligible for replication using advanced journaling techniques.

When you initially configure a data group you must load tracking entries, start journaling for the objects which they identify, and synchronize the objects with the target system. The same is true when you add new or change existing data group IFS entries or object entries.

It is also possible for tracking entries to be automatically created. After creating or changing data group IFS entries or object entries that are configured for advanced journaling, tracking entries are created the next time the data group is started. However, this method has disadvantages. If the objects you intend to replicate with advanced journaling are not journaled before the start request is made, MIMIX places the tracking entries in *HLDERR state. Error messages indicate that journaling must be started and the objects must be synchronized between systems. This can significantly increase the amount of time needed to start a data group.

Once a tracking entry exists, it remains until one of the following occurs:

• The object identified by the tracking entry is deleted from the source system and replication of the delete action completes on the target system.
• The data group configuration changes so that an object is no longer identified for replication using advanced journaling.

Viewing tracking entries is supported in both 5250 emulator and MIMIX Availability Manager interfaces. Their status is included with other data group status. You can also see what objects they identify, whether the objects are journaled, and their replication status. You can also perform operations on tracking entries, such as holding and releasing, to address replication problems.

Figure 5 shows an IFS user directory structure, the include and exclude processing selected for objects within that structure, and the resultant list of tracking entries created by MIMIX.

Figure 5. IFS tracking entries produced by MIMIX

IFS object file identifiers (FIDs)

Normally, when dealing with objects and database files, you can see the name of the object (filename, library name, and member name) in the journal entries. For IFS objects, it is impractical to put the name of the IFS object in the header of the journal entry due to potentially long path names. Instead, each IFS object on a system has a unique 16-byte file ID (FID), and the FID is used to identify IFS objects in journal entries. The FID is machine-specific, meaning that IFS objects with the same path name may have different FIDs on different systems.

MIMIX tracks the FIDs for all IFS objects configured for replication with advanced journaling via IFS tracking entries, including the source and target file ID (FID). When the data group is switched, the source and target FID associations are reversed, allowing MIMIX to successfully replicate transactions to IFS objects.
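As an illustration of why MIMIX tracks FIDs rather than relying on path names, consider the same stream file on two systems. The path and FID values shown are hypothetical.

```
/* Source system: /hotel/rates/current.dat   FID x'0102...'          */
/* Target system: /hotel/rates/current.dat   FID x'9A8B...'          */
/*                                                                   */
/* The path names match, but the 16-byte FIDs differ because FIDs    */
/* are machine-specific. The IFS tracking entry records both the     */
/* source and target FID; when the data group is switched, the       */
/* association is reversed so that journal entries, which identify   */
/* the object only by FID, still resolve to the correct stream file. */
```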
the database receive process transfers the data received over the communications line from the source system into a log space on the target system. it is better to use *ALL as the member name in that data group file entry. Using remote journaling support offers many benefits over using MIMIX source-send processes. if all journal entries are sent to the target system. If individual members are specified. Journal entries for which a match is found for the file and library are then transported to the target system for replication according to the DB journal entry processing parameter (DBJRNPRC) filtering specified in the data group definition. The journal definition default operation specifies that MIMIX automatically create the next journal receiver when the journal receiver reaches the threshold size you specified in the journal definition. reorganizations. After MIMIX finishes reading the entries from the current journal receiver. also indicates whether to send only the after-image of the change or both before-image and after-images. only those members you identify are processed. MIMIX source-send processing for database replication and the data area poller process. On the target system. deletes) or file level (clears. library. MIMIX uses multiple apply processes in parallel for maximum efficiency. Throughout this process.Lesser-used processes for user journal replication Lesser-used processes for user journal replication This topic describes two lesser used replication processes. transactions are applied at record level (puts. and member level. the database send process collects data from journal entries on the source system and compares them to the data group file entries defined for the data group. Note: If an application program adds or removes members and all members within the file are to be processed by MIMIX. As journal entries are added to the journal receiver. 
User journal replication with source-send processing This topic describes user journal replication when data groups are configured to use MIMIX source-send processes. MIMIX uses journaling to identify changes to database files and other journaled objects to be replicated. it deletes this receiver (if configured to do so) 73 . member deletes). For database files. Alternatively. The matching for the apply process is at the file. The database apply process applies replicated database transactions from the log space to the appropriate database physical file member or data area on the target system. updates. MIMIX manages the journal receiver unless you have specified otherwise. Note: New data groups are created to use remote journaling support for user journal replication when shipped default values on commands are used. specified either at the data group level or on individual data group file entries. The Data group file entries (FEOPT) parameter. Files can also be put on hold (*HLD) manually. The data group file entry can also specify a particular apply session to use for processing on the target system. This file entry option can be specified on the data group definition or on individual data group entries. MIMIX puts the member in hold error (*HLDERR) status so that no further transactions are applied. MIMIX supports the following data area types: Table 5. equal to 1 byte. up to 24 bytes in length and 9 decimal positions logical. This eliminates excessive use of disk storage and allows valuable system resources to be available for other processing. You define a data group data area entry for each data area that you want MIMIX to manage. data group file entries identify additional information used by database processes. When a data group is configured to use the data area polling process. If you expect to synchronize files at a later time. polling programs capture changes to data areas defined to the data group at specified intervals. 
MIMIX checks for changes to the data area type and length as well as to the contents of the data area. journal entries for the file in the log spaces are deleted and additional entries received from the target system are discarded. The file entry option Lock member during apply indicates whether or not to allow only restricted access (read-only) to the file on the backup system.and begins reading entries from the next journal receiver. the data area polling process retrieves the data area and converts it into a journal entry. If a data area has changed. The data area polling process Note: The preferred way to replicate data areas is through the user journal. it is better to put the file in an ignored state. Besides indicating the mapping between source and target file names. character. If a replication problem is detected. Putting a file on hold causes MIMIX to retain all journal entries for the file in log spaces on the target system. This process retrieves each data area defined to a data group at the interval you specify and determines whether or not a data area has changed. *CHAR *DEC *LGL Data area types supported by the data area polling process. MIMIX creates a journal entry when there is a change to a data area. This keeps the log spaces to a minimal size and improves efficiency for the apply process. Data areas can alternatively be replicated through system journal replication processes or with the data area poller. The data group definition determines how frequently the polling programs check for changes to data areas. up to 2000 bytes decimal. By setting files to an ignored state. The data area polling process runs on the source system. This 74 . A status code in the data group file entry also stores the status of the file or member in the MIMIX process. Lesser-used processes for user journal replication journal entry is sent through the normal user journal replication processing and is used to update the data area on the target system. 
For example, if a data area that is defined to MIMIX is deleted and recreated with new attributes, the data area polling process will capture the new attributes and recreate the data area on the target system.

CHAPTER 3  Preparing for MIMIX

This chapter outlines what you need to do to prepare for using MIMIX. Preparing for the installation and use of MIMIX is a very important step towards meeting your availability management requirements.

Give special attention to planning and implementing security for MIMIX. Each product has its own product-level security, but now you must consider the security implications of common functions used by each product. Because of their shared functions and their interaction with other MIMIX products, you can make your systems more secure with MIMIX product-level and command-level security. General security considerations for all MIMIX products can be found in the License and Availability Manager book. Information about setting security for common functions is also found in the License and Availability Manager book. In addition, it is best to determine IBM System i requirements for user journal and system journal processing in the context of your total MIMIX environment.

The topics in this chapter include:
• “Checklist: pre-configuration” on page 77 provides a procedure to follow to prepare to configure MIMIX on each system that participates in a MIMIX installation.
• “Data that should not be replicated” on page 78 describes how to consider what data should not be replicated.
• “Planning for journaled IFS objects, data areas, and data queues” on page 79 describes considerations when planning to use advanced journaling for IFS objects, data areas, or data queues.
• “Starting the MIMIXSBS subsystem” on page 83 describes how to start the MIMIXSBS subsystem which all MIMIX products run in.
• “Accessing the MIMIX Main Menu” on page 84 describes the MIMIX Main Menu and its two assistance levels, basic and intermediate, which provide options to help simplify daily interactions with MIMIX.
Checklist: pre-configuration

You need to configure MIMIX on each system that participates in a MIMIX installation. By now, you should have completed the following tasks:
• The checklist for installing MIMIX software in the License and Availability Manager book
• You should have also turned on product-level security and granted authority to user profiles to control access to the MIMIX products.

Do the following:
1. If it is not already active, start the MIMIXSBS subsystem using topic “Starting the MIMIXSBS subsystem” on page 83.
2. Decide what replication choices are appropriate for your environment. At this time, you should review the information in “Data that should not be replicated” on page 78. For detailed information see the chapter “Planning choices and details by object class” on page 86.
3. Configure each system in the MIMIX installation, beginning with the management system. The chapter “Configuration checklists” on page 123 identifies the primary options you have for configuring MIMIX.
4. Verify the configuration.
5. Update any automation programs you use with MIMIX and verify their operation. See the Using MIMIX book for more information.
6. Once you complete the configuration process you choose, you may also need to do one or more of the following:
• If you plan to use MIMIX Monitor in conjunction with MIMIX, you may need to write exit programs for monitoring activity and you may want to ensure that your monitor definitions are replicated. For more information about MIMIX Model Switch Framework, see the Using MIMIX Monitor book.
• Verify any exit programs that are called by MIMIX.
• If you plan to use switching support, you or your Certified MIMIX Consultant may need to take additional action to set up and test switching. In order to use MIMIX Switch Assistant, a default model switch framework must be configured and identified in MIMIX policies. For more information about switching and policies, see the Using MIMIX book.

Data that should not be replicated

There are some considerations to keep in mind when defining data for replication. Not only do you need to determine what is critical to replicate, but you also need to consider data that should not be replicated.

System environment - Consider the following:
• Do not replicate system user profiles from one system to another. For example, the QSYSOPR and QSECOFR user profiles should not be replicated.
• Do not replicate the LAKEVIEW or MIMIXOWN user profiles.
• Do not replicate IBM System i objects from one system to another. IBM-supplied libraries, files, and other objects for System i typically begin with the prefix letter Q.

MIMIX environment - Consider the following:
• Do not replicate the LAKEVIEW library, the MIMIXQGPL library, any MIMIX installation libraries, or any MIMIX data libraries.
  Note: MIMIX is the default name for the MIMIX installation library - the library in which MIMIX ha1 or MIMIX ha Lite is installed. MIMIX data libraries are associated with a MIMIX installation library and have names in the format installation-library-name_x, where x is a letter or number.
• Do not place user created objects or programs in LAKEVIEW, MIMIXQGPL, MIMIX installation libraries, or MIMIX data libraries. This includes any programs created as part of your MIMIX Model Switch Framework. If you place such objects or programs in these libraries, they may be deleted during the installation process. Objects that are in these libraries must be placed in a different library before installing software. Job descriptions, such as the MIMIX Port job, can continue to be placed into the MIMIXQGPL library.

User environment - As you identify your critical data, consider the following:
• You may not need to replicate temporary files, work files, and temporary objects, including DLOs and stream files. Evaluate how your applications use such files to determine if they need to be replicated.
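As a rough illustration of the library rules above, the following sketch flags libraries that this section says should be left out of replication. The function name and the simple prefix tests are assumptions made for the example; a real configuration expresses this with data group entries, not code.

```python
def is_excluded_library(library, installation_lib="MIMIX"):
    """Return True for libraries that should not be replicated (sketch)."""
    library = library.upper()
    # The product libraries themselves must never be replicated.
    if library in ("LAKEVIEW", "MIMIXQGPL", installation_lib):
        return True
    # MIMIX data libraries: installation-library-name_x, x a letter or number.
    if (library.startswith(installation_lib + "_")
            and len(library) == len(installation_lib) + 2):
        return True
    # IBM-supplied libraries typically begin with the letter Q.
    if library.startswith("Q"):
        return True
    return False
```

A filter like this would normally be mirrored by Exclude-type data group entries rather than application code; the sketch only makes the stated rules concrete.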
Planning for journaled IFS objects, data areas, and data queues

You can choose to use the cooperative processing support within MIMIX to replicate any combination of journaled IFS objects, data areas, and data queues. In addition to configuration and journaling requirements and the restrictions that apply, you need to address several other considerations when planning to replicate journaled IFS objects, data areas, or data queues. These considerations affect whether journals should be shared, whether objects should be replicated in a data group shared with database files, whether configuration changes are needed to change apply sessions for database files, and whether exit programs need to be updated. The benefits of user journal replication are described in “Benefits of advanced journaling” on page 69. For restrictions and limitations, see “Identifying data areas and data queues for replication” on page 103 and “Identifying IFS objects for replication” on page 106.

Is user journal replication appropriate for your environment? While user journal replication has significant advantages, it may not be appropriate for your environment. Or, it may be appropriate for only some of the supported object types. Consider the following:

• Do the objects remain relatively static? Static objects typically persist after they are created, while their data may change. Examples of more dynamic objects include temporary objects, which are created, renamed, and deleted frequently. Objects for some applications, like those which heavily use *DTAQs, may be better suited to replication from the system journal.

• What release of IBM i is in use? On some operating system releases, the types of operations that can be replicated from a user journal are limited.

• Serialized transactions with database files - Transactions completed for database files and objects (IFS objects, data areas, or data queues) can be serialized with one another when they are applied to objects on the target system. If you require serialization, these objects and database files must share the same data group as well as the same database apply session. Since MIMIX uses apply session A for all objects configured for advanced journaling, serialization may require that you change the configuration for database files to ensure that they use the same apply session. Load balancing may also become a concern. See “Database apply session balancing” on page 81.

• If a large amount of data is to be replicated, consider the overall replication performance and throughput requirements when choosing a configuration.

Converting existing data groups

When converting an existing data group, consider the following:

• You may have previously used data groups with a Data group type (TYPE) value of *OBJ to separate replication of IFS, data area, or data queue objects from other activity. Converting these data groups to use advanced journaling will not cause problems with the data group.

• Changing the replication mechanism of IFS objects, data areas, or data queues from system journal replication to user journal replication generally reduces bandwidth consumption, improves replication latency, and eliminates the locking contention associated with the save and restore process. However, if these objects have never been replicated, the addition of IFS byte stream files, data areas, or data queues to the replication environment will increase bandwidth consumption and processing workload. Adding IFS, data area, or data queue objects configured for advanced journaling to an existing database replication environment may increase replication activity and affect performance.

• The data group definition and existing data group entries must be changed to the values required for advanced journaling.

• You may need to create additional data group IFS or object entries in order to achieve the desired results. This may include creating entries that exclude objects from replication. When the data group is started, all objects in the IFS path or the library specified that match the selection criteria are selected.

Conversion examples

To illustrate a simple conversion, assume that the systems defined to data group KEYAPP are running on IBM i V5R4. You use this data group for system journal replication of the objects in library PRODLIB. You have confirmed that the data group definition specifies TYPE(*ALL) and does not need to change. The data group has one data group object entry which has the following values:

LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE)

Example 1 - You decide to use advanced journaling for all *DTAARA and *DTAQ objects replicated with data group KEYAPP. After performing a controlled end of the data group, you change the data group object entry to have the following values:

LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)

When the data group is started, object tracking entries are loaded and journaling is started for the data area and data queue objects in PRODLIB. Those objects will now be replicated from a user journal. Any other object types in PRODLIB continue to be replicated from the system journal.

Example 2 - You want to use advanced journaling for data group KEYAPP, but one data area, XYZ, must remain replicated from the system journal. You will need the data group object entry described in Example 1:

LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE *DTAARA *DTAQ)

You will also need a new data group object entry that specifies the following so that data area XYZ can be replicated from the system journal:

LIB1(PRODLIB) OBJ1(XYZ) OBJTYPE(*DTAARA) PRCTYPE(*INCLD) COOPDB(*NO)

Database apply session balancing

In each data group, one database apply session, session A, is used for all IFS objects, data areas, and data queues configured for replication processing from a user journal. If you also replicate database files in the same data group, the way in which files are configured for replication can also affect how much data is processed by apply session A. Consider the following:

• In MIMIX Dynamic Apply configurations, newly created database files are distributed evenly across database apply sessions by default. This ensures that the files are distributed in a way that will not overload any one apply session.

• In configurations using legacy cooperative processing, newly created database files are distributed to apply session A by default. It may be necessary to change the apply session to which cooperatively processed files are directed when the database files are created to prevent apply session A from becoming overloaded. The apply session can be changed in the file entry options (FEOPT) on the data group object and file entries.

User exit program considerations

When new or different journaled object types are added to an existing data group, user exit programs may be affected. When these object types are journaled to a user journal, new journal entry codes are provided to the user exit program; if the user exit program interprets the journal code, changes may be required. The path name for IFS objects cannot be interpreted in the same way as it can for database files, so user exit programs that rely on the library and file names in the journal entry may need to be changed to either ignore IFS journal entries or process them by resolving the FID to a path name using the IBM-supplied APIs.
Be aware of the following additional exit program considerations when changing an existing configuration to include IFS objects, data areas, and data queues replicated from a user journal:

• Journal entries for journaled IFS objects, data areas, and data queues will be routed to the user exit program.

• When IFS objects, data areas, or data queues are journaled to a user journal, MIMIX uses the file ID (FID) to identify the IFS object being replicated. This may be a performance consideration relative to user exit program design.

• Journaled IFS objects and data queues can have incomplete journal entries. MIMIX provides two or more journal entries with duplicate journal entry sequence numbers and journal codes and types to the user exit program when the data for the incomplete entry is retrieved. Programs need to correctly handle these duplicate entries representing the single, original journal entry.

Contact your Certified MIMIX Consultant for assistance with user exit programs. In some cases, you may also need to adjust the configured apply session in data group object and file entries to either ensure that files that should be serialized remain in the same apply session or to move files to another apply session to manually balance loads. Logical files and physical files with referential constraints also have apply session requirements to consider. For more information see “Considerations for LF and PF files” on page 96. Due to the unique requirements and complexities of each MIMIX implementation, it is strongly recommended that you contact your Certified MIMIX Consultant to determine the best way in which to design and implement this change.

Starting the MIMIXSBS subsystem

By default, all MIMIX products run in the MIMIXSBS subsystem that is created when you install the product. This subsystem must be active before you can use the MIMIX products. If the MIMIXSBS is not already active, start the subsystem by typing the command STRSBS SBS(MIMIXQGPL/MIMIXSBS) and pressing Enter. Any autostart job entries listed in the MIMIXSBS subsystem will start when the subsystem is started.

Note: You can ensure that the MIMIX subsystem is started after each IPL by adding this command to the end of the startup program for your system.

Accessing the MIMIX Main Menu

The MIMIX command accesses the main menu for a MIMIX installation. The MIMIX Main Menu has two assistance levels, basic and intermediate, with options designed to simplify day-to-day interaction with MIMIX. The options on the menu vary with the assistance level. In either assistance level, the available options also depend on the MIMIX products installed in the installation library and their licensing. The products installed and the licensing also affect subsequent menus and displays.

Accessing the menu - If you know the name of the MIMIX installation you want, you can use the name to library-qualify the command, as follows: Type the command library-name/MIMIX and press Enter. If you do not know the name of the library, do the following:
1. Type the command LAKEVIEW/WRKPRD and press Enter.
2. Type a 9 (Display product menu) next to the product in the library you want on the Lakeview Technology Installed Products display and press Enter.

The command defaults to the basic assistance level, shown in Figure 6. Figure 7 shows the intermediate assistance level.

Changing the assistance level - The F21 key (Assistance level) on the main menu toggles between basic and intermediate levels of the menu. You can also specify the Assistance Level (ASTLVL) parameter on the MIMIX command.

Note: Procedures are written assuming you are using the MIMIX Availability Status (WRKMMXSTS) display, which can only be selected from the MIMIX Basic Main Menu.
The default name of the installation library is MIMIX.

Figure 6. MIMIX Basic Main Menu. The basic assistance level offers options including Availability status (WRKMMXSTS), Start MIMIX, End MIMIX, Start or complete switch, Work with monitors (WRKMON), Work with messages (WRKMSGLOG), the Configuration menu, the Compare, verify, and synchronize menu, the Utilities menu, and the Product management menu (LAKEVIEW/PRDMGT). Function keys include F3=Exit, F4=Prompt, F9=Retrieve, F12=Cancel, and F21=Assistance level. (C) Copyright Vision Solutions, Inc., 1990, 2008.

Figure 7. MIMIX Intermediate Main Menu. The intermediate assistance level offers options including Work with data groups (WRKDG), Work with systems (WRKSYS), Work with messages (WRKMSGLOG), Work with monitors (WRKMON), the Configuration menu, the Compare, verify, and synchronize menu, the Utilities menu, and the Product management menu (LAKEVIEW/PRDMGT). The same function keys are available.

We recommend you use the MIMIX Basic Main Menu unless you must access the MIMIX Intermediate Main Menu.

CHAPTER 4  Planning choices and details by object class

This chapter describes the replication choices available for objects and identifies critical requirements, limitations, and configuration considerations for those choices.

Many MIMIX processes are customized to provide optimal handling for certain classes of related object types and differentiate between database files, library-based objects, integrated file system (IFS) objects, and document library objects (DLOs). Each class of information is identified for replication by a corresponding class of data group entries. In each class, a data group entry identifies a source of information that can be replicated by a specific data group. A data group can have any combination of data group entry classes. Some classes even support multiple choices for replication.

When you configure MIMIX, each data group entry you create identifies one or more objects to be considered for replication or to be explicitly excluded from replication. When determining whether to replicate a journaled transaction, MIMIX evaluates all of the data group entries for the class to which the object belongs. If the object is within the name space determined by the existing data group entries, the transaction is replicated.

The topics in this chapter include:
• “Replication choices by object type” on page 88 identifies the available replication choices for each object class.
• “Configured object auditing value for data group entries” on page 89 describes how MIMIX uses a configured object auditing value that is identified in data group entries and when MIMIX will change an object’s auditing value to match this configuration value.
• “Identifying library-based objects for replication” on page 91 includes information that is common to all library-based objects, such as how MIMIX interprets the data group object entries defined for a data group. This topic also provides examples and additional detail about configuring entries to replicate spooled files and user profiles.
• “Identifying logical and physical files for replication” on page 96 identifies the replication choices and considerations for *FILE objects with logical or physical file extended attributes. This topic identifies the requirements, limitations, and configuration requirements of MIMIX Dynamic Apply and legacy cooperative processing.
• “Identifying data areas and data queues for replication” on page 103 identifies the replication choices and configuration requirements for library-based objects of type *DTAARA and *DTAQ. This topic also identifies restrictions for replication of these object types when user journal processes (advanced journaling) are used.
• “Identifying IFS objects for replication” on page 106 identifies supported and unsupported file systems, restrictions and configuration requirements for replication of these object types when user journal processes (advanced journaling) are used, and considerations such as long path names and case sensitivity for IFS objects.
• “Identifying DLOs for replication” on page 111 describes how MIMIX interprets the data group DLO entries defined for a data group and includes examples for documents and folders.
• “Processing of newly created files and objects” on page 114 describes how new IFS objects, data areas, data queues, and files that have journaling implicitly started are replicated from the user journal.
• “Processing variations for common operations” on page 117 describes configuration-related variations in how MIMIX replicates move/rename, delete, and restore operations.

Replication choices by object type

With version 5, a new configuration of MIMIX that uses shipped defaults for all configuration choices will use remote journaling support for replication from user journals. User journal replication can be configured for either remote journaling or MIMIX source-send processes. Default configuration choices result in physical files (data and source) as well as logical files being processed through user journal replication, and all other supported object types and classes being replicated using system journal replication. You can optionally use other replication processes as described in Table 6.

Table 6. Replication choices by object class

Objects of type *FILE, extended attributes PF (data, source) and LF
  Replication options - Default: user journal with MIMIX Dynamic Apply (note 1). Other: for PF data files, legacy cooperative processing (note 2); for PF source and LF files, system journal.
  Required classes of DG entry: Object entries and File entries
  More information: “Identifying logical and physical files for replication” on page 96

*FILE, other extended attributes
  Replication options - Default: system journal
  Required classes of DG entry: Object entries
  More information: “Identifying library-based objects for replication” on page 91

Objects of type *DTAARA
  Replication options - Default: system journal. Other: advanced journaling (note 2); data area polling process associated with user journal (note 2)
  Required classes of DG entry: Object entries; Object entries and Object tracking entries; Data area entries
  More information: “Identifying data areas and data queues for replication” on page 103

Objects of type *DTAQ
  Replication options - Default: system journal. Other: advanced journaling (note 2)
  Required classes of DG entry: Object entries; Object entries and Object tracking entries
  More information: “Identifying data areas and data queues for replication” on page 103

Other library-based objects
  Replication options - Default: system journal
  Required classes of DG entry: Object entries
  More information: “Identifying library-based objects for replication” on page 91

IFS objects
  Replication options - Default: system journal. Other: advanced journaling (note 2)
  Required classes of DG entry: IFS entries; IFS entries and IFS tracking entries
  More information: “Identifying IFS objects for replication” on page 106

DLOs
  Replication options - Default: system journal
  Required classes of DG entry: DLO entries
  More information: “Identifying DLOs for replication” on page 111

1. New data groups are created to use remote journaling and to cooperatively process files using MIMIX Dynamic Apply.
2. Existing data groups can be converted to this method of cooperative processing.
Configured object auditing value for data group entries

When you create data group entries for library-based objects, IFS objects, or DLOs, you can specify an object auditing value within the configuration. The Object auditing value (OBJAUD) parameter defines a configured object auditing level for use by MIMIX. This configured value is associated with all objects identified for processing by the data group entry. The configured value is used during initial configuration and during processing of requests to compare objects that are identified by configuration data.

An object’s actual auditing level determines the extent to which changes to the object are recorded in the system journal and replicated by MIMIX. MIMIX evaluates whether an object’s auditing value matches the configured value of the data group entry that most closely matches the object being processed. If the actual value is lower than the configured value, MIMIX sets the object to the configured value so that future changes to the object will be recorded as expected in the system journal and therefore can be replicated.

Note: MIMIX only considers changing an object’s auditing value when the data group object entry is configured for system journal replication. MIMIX does not change the value for files configured for MIMIX Dynamic Apply, or for IFS objects, data areas, or data queues configured for user journal replication.

When MIMIX changes the audit level, the possible values have the following results:
• The default value, *CHANGE, ensures that all changes to the object by all users are recorded in the system journal.
• The value *ALL ensures that all changes or read accesses to the object by all users are recorded in the system journal. The journal entries generated by read accesses to objects are not used for replication and their presence can adversely affect replication performance.
• The value *NONE results in no entries recorded in the system journal when the object is accessed or changed. In specific scenarios, the value *NONE can improve MIMIX performance by preventing unneeded entries from being written to the system journal.

The configured object auditing value affects how MIMIX handles changes to attributes of objects. Specifically, the configured value can affect replication of some journal entries generated when an object attribute changes: T-ZC journal entries for files and IFS objects, and T-YC entries for DLOs. Changes that generate other types of journal entries are not affected by this parameter. This is particularly important for, but not limited to, objects configured for system journal replication.
• The values *CHANGE and *ALL result in replication of T-ZC and T-YC journal entries.
• The value *NONE prevents replication of attribute and data changes for the identified object or DLO because T-ZC and T-YC entries are not recorded in the system journal.

When a compare request includes an object with a configured object auditing value of *NONE, any differences found for attributes that could generate T-ZC or T-YC journal entries are reported as *EC (equal configuration).

You may also want to read the following:
• For more information about when MIMIX sets an object’s auditing value, see “Managing object auditing” on page 55.
• For more information about manually setting values and examples, see “Setting data group auditing values manually” on page 270.
• To see what attributes can be compared and replicated, see the following topics:
  – “Attributes compared and expected results - #FILATR, #FILATRMBR audits” on page 556
  – “Attributes compared and expected results - #OBJATR audit” on page 561
  – “Attributes compared and expected results - #IFSATR audit” on page 569
  – “Attributes compared and expected results - #DLOATR audit” on page 571
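The audit-level rule described above - raise the object's value only when it is below the configured value, and only for entries configured for system journal replication - can be sketched as follows. The ranking and the function name are assumptions made for the illustration, not MIMIX internals.

```python
# Relative ordering of auditing levels used for the comparison below:
# *NONE records nothing, *CHANGE records changes, *ALL also records reads.
AUDIT_RANK = {"*NONE": 0, "*CHANGE": 1, "*ALL": 2}

def effective_audit(actual, configured, system_journal_entry=True):
    """Return the auditing value the object ends up with (sketch).

    The value is only raised, never lowered, and only when the data group
    entry is configured for system journal replication."""
    if system_journal_entry and AUDIT_RANK[actual] < AUDIT_RANK[configured]:
        return configured
    return actual
```

Note that an object already audited above the configured level keeps its higher value, which matches the "lower than the configured value" wording above.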
Identifying library-based objects for replication

MIMIX uses data group object entries to identify whether to process transactions for library-based objects. Each data group object entry identifies one or more library-based objects. An object entry can specify either a specific or a generic name for the library and object. In addition, each object entry also identifies the object types and extended object attributes (for *FILE and *DEVD objects) to be selected, defines a configured object auditing level for the identified objects, and indicates whether the identified objects are to be included in or excluded from replication. Collectively, the object entries identify which library-based objects can be replicated by a particular data group. For detailed procedures, see “Configuring data group entries” on page 241.

For most supported object types which can be identified by data group object entries, MIMIX supports only system journal replication. For a list of object types, see “Supported object types for system journal replication” on page 505. A limited number of object types which use the system journal replication path have unique configuration requirements. These are described in “Identifying spooled files for replication” on page 93 and “Replicating user profiles and associated message queues” on page 95.

Replication options for object types journaled to a user journal - For objects of type *FILE, *DTAARA, and *DTAQ, MIMIX supports multiple replication methods. Each method varies in its efficiency, in its supported extended attributes, and in additional configuration requirements. A configuration that uses the user journal is also called an advanced journaling configuration.

• For logical and physical files, MIMIX supports several methods of replication. The extended attribute and other configuration data are considered when MIMIX determines what replication path to use for identified objects. For other extended attribute types, only the system journal replication path is available. Only data group object entries are required to identify these files for replication. See “Identifying logical and physical files for replication” on page 96 for additional details.

• For *DTAARA and *DTAQ object types, MIMIX supports replication using either system journal or user journal replication processes. Additional configuration data is evaluated when determining what replication path to use for the identified objects. Additional information, including configuration requirements, is described in “Identifying data areas and data queues for replication” on page 103.

• For *FILE objects configured for replication through the system journal, the Omit content (OMTDTA) parameter provides the ability to omit a subset of data-changing operations from replication. Also, MIMIX caches extended file attribute information for a fixed set of *FILE objects. For more information, see “Caching extended attributes of *FILE objects” on page 313 and “Omitting T-ZC content from system journal replication” on page 350.

How MIMIX uses object entries to evaluate journal entries for replication

The following information and example can help you determine whether the objects you specify in data group object entries will be selected for replication. The data group object entries are checked from the most specific to the least specific.
When determining whether to process a journal entry for a library-based object, MIMIX looks for a match between the object information in the journal entry and one of the data group object entries. The library name is the first search element, followed by the object type, the attribute (for files and device descriptions), and the object name. The most significant match found (if any) is checked to determine whether to include or exclude the journal entry in replication. MIMIX determines which replication process will be used only after it determines whether the library-based object will be replicated.

Table 7 shows how MIMIX checks a journal entry for a match with a data group object entry. The columns are arranged to show the priority of the elements within the object entry, with the most significant (library name) at left and the least significant (object name) at right.

Table 7. Matching order for library-based object names

Search Order   Library Name   Object Type   Attribute(1)   Object Name
1              Exact          Exact         Exact          Exact
2              Exact          Exact         Exact          Generic*
3              Exact          Exact         Exact          *ALL
4              Exact          Exact         *ALL           Exact
5              Exact          Exact         *ALL           Generic*
6              Exact          Exact         *ALL           *ALL
7              Exact          *ALL          Exact          Exact
8              Exact          *ALL          Exact          Generic*
9              Exact          *ALL          Exact          *ALL
10             Exact          *ALL          *ALL           Exact
11             Exact          *ALL          *ALL           Generic*
12             Exact          *ALL          *ALL           *ALL
13             Generic*       Exact         Exact          Exact
14             Generic*       Exact         Exact          Generic*
15             Generic*       Exact         Exact          *ALL
16             Generic*       Exact         *ALL           Exact
17             Generic*       Exact         *ALL           Generic*
18             Generic*       Exact         *ALL           *ALL
19             Generic*       *ALL          Exact          Exact
20             Generic*       *ALL          Exact          Generic*
21             Generic*       *ALL          Exact          *ALL
22             Generic*       *ALL          *ALL           Exact
23             Generic*       *ALL          *ALL           Generic*
24             Generic*       *ALL          *ALL           *ALL

1. The extended object attribute is only checked for objects of type *FILE and *DEVD.

When configuring data group object entries, the flexibility of the generic support allows a variety of include and exclude combinations for a given library or set of libraries. But generic name support can also cause unexpected results if it is not well planned. Consider the search order shown in Table 7 when configuring data group object entries to ensure that objects are not unexpectedly included or excluded in replication.

Example - For example, say that you have a data group configured with data group object entries like those shown in Table 9. The journal entries MIMIX is evaluating for replication are shown in Table 8.

Table 8. Sample journal transactions for objects in the system journal

Library    Object     Object Type
FINANCE    BOOKKEEP   *PGM
FINANCE    ACCOUNTG   *FILE
FINANCE    BALANCE    *DTAARA
FINANCE    ACCOUNT1   *DTAARA

Table 9. Sample of data group object entries, arranged in order from most to least specific

Entry   Source Library   Object Type   Object Name   Attribute   Process Type
1       FINANCE          *PGM          *ALL          *ALL        *INCLD
2       FINANCE          *DTAARA       *ALL          *ALL        *EXCLD
3       FINANCE          *ALL          ACC*          *ALL        *INCLD

A transaction is received from the system journal for program BOOKKEEP in library FINANCE. MIMIX will replicate this object since it fits the criteria of the first data group object entry shown in Table 9: an exact match for the library name, an exact match for object type, and an object name match to *ALL. A transaction for file ACCOUNTG in library FINANCE would also be replicated since it fits the third entry.

A transaction for data area BALANCE in library FINANCE would not be replicated since it fits the second entry, an Exclude entry. Although the transaction fits both the second and third entries shown in Table 9, the second entry determines whether to replicate because it provides a more significant match in the second criteria checked (object type). Likewise, a transaction for data area ACCOUNT1 in library FINANCE would not be replicated. In order for MIMIX to process the data area ACCOUNT1, an additional data group object entry with process type *INCLD could be added for an object type of *DTAARA with an exact name of ACCOUNT1 or a generic name ACC*.

Identifying spooled files for replication

MIMIX supports spooled file replication on an output queue basis. When an output queue (*OUTQ) is identified for replication by a data group object entry, its spooled files are not automatically replicated when default values are used. Table 10 identifies the values required for spooled file replication.

It is important to consider which spooled files must be replicated and which should not. Most likely, you want to limit the spooled files that you replicate to mission-critical information. Some output queues contain a large number of non-critical spooled files and probably should not be replicated. It may be useful to direct important spooled files that should be replicated to specific output queues instead of defining a large number of output queues for replication.

When MIMIX processes an output queue that is identified by an object entry with the appropriate settings, MIMIX ensures that the values *SPLFDTA and *PRTDTA are included in the system value for the security auditing level (QAUDLVL). This causes the system to generate spooled file (T-SF) entries in the system journal. When a spooled file is created, deleted, moved, or its attributes are changed, the resulting entries in the system journal are processed by a MIMIX object send job and are replicated.
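A hedged sketch of an object entry that enables spooled file replication for one output queue follows. The OBJTYPE and REPSPLF keywords match the parameter names described in this section; the ADDDGOBJE command name, the PRCTYPE keyword, and the data group, library, and queue names are assumptions to verify for your environment.

```
ADDDGOBJE DGDFN(MYDGRP SYSA SYSB) LIB1(ACCTLIB) OBJ1(PAYROLLQ) +
          OBJTYPE(*OUTQ) REPSPLF(*YES) PRCTYPE(*INCLD)
```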
Table 10. Data group object entry parameter values for spooled file replication

Parameter                            Value
Object type (OBJTYPE)                *ALL or *OUTQ
Replicate spooled files (REPSPLF)    *YES

When an output queue is selected for replication and the data group object entry specifies *YES for Replicate spooled files, all spooled files for the output queue (*OUTQ) are replicated by system journal replication processes.

Additional choices for spooled file replication

MIMIX provides additional options to customize your choices for spooled file replication.

Options for spooled file status: You can specify additional options for processing spooled files. The Spooled file options (SPLFOPT) parameter is only available on commands to add and change data group object entries. The following values support choosing how the status of replicated spooled files is handled on the target system:

*NONE      This is the shipped default value. Spooled files on the target system will have the same status as on the source system.
*HLD       All replicated spooled files are put on hold on the target system regardless of their status on the source system.
*HLDONSAV  All replicated spooled files that have a saved status on the source system will be put on hold on the target system. Spooled files on the source system which have other status values will have the same status on the target system.

For example, if you have a program that automatically prints spooled files, you may want to use one of these values to control what is printed after replication when printer writers are active.

If you move a spooled file between output queues which have different configured values for the SPLFOPT parameter, consider the following:
• Spooled files moved from an output queue configured with SPLFOPT(*NONE) to an output queue configured with SPLFOPT(*HLD) are placed in a held state on the target system.
• Spooled files moved from an output queue configured with SPLFOPT(*HLD) to an output queue configured with SPLFOPT(*NONE) or SPLFOPT(*HLDONSAV) remain in a held state on the target system until you take action to release them.

Keeping deleted spooled files: You can also specify to keep spooled files on the target system after they have been deleted from the source system by using the Keep deleted spooled files parameter on the data group definition. The parameter is also available on commands to add and change data group object entries. This parameter can be helpful if your environment includes programs which automatically process spooled files on the target system.

Replicating user profiles and associated message queues

When user profile objects (*USRPRF) are identified by a data group object entry which specifies *ALL or *USRPRF for the Object type parameter, MIMIX replicates the objects using system journal replication processes. When MIMIX replicates user profiles, the message queue (*MSGQ) objects associated with the *USRPRF objects may also be created automatically on the target system as a result of replication. If the *MSGQ objects are not also configured for replication, the private authorities for the *MSGQ objects may not be the same between the source and target systems. If it is necessary for the private authorities for the *MSGQ objects to be identical between the source and target systems, it is recommended that *MSGQ objects associated with *USRPRF objects be configured for replication.

Table 11 shows the data group object entries required to replicate user profiles beginning with the letter A and maintain identical private authorities on associated message queues. In this example, the user profile ABC and its associated message queue are excluded from replication.

Table 11. Sample data group object entries for maintaining private authorities of message queues associated with user profiles

Entry   Source Library   Object Type   Object Name   Process Type
1       QSYS             *USRPRF       A*            *INCLD
2       QUSRSYS          *MSGQ         A*            *INCLD
3       QSYS             *USRPRF       ABC           *EXCLD
4       QUSRSYS          *MSGQ         ABC           *EXCLD
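As a sketch, the four entries in Table 11 could be created with commands along the following lines. The ADDDGOBJE command name and the PRCTYPE keyword are assumed to match the descriptive parameter names used in this section, and the data group and system names are hypothetical; verify the exact syntax for your MIMIX level.

```
ADDDGOBJE DGDFN(MYDGRP SYSA SYSB) LIB1(QSYS)    OBJ1(A*)  OBJTYPE(*USRPRF) PRCTYPE(*INCLD)
ADDDGOBJE DGDFN(MYDGRP SYSA SYSB) LIB1(QUSRSYS) OBJ1(A*)  OBJTYPE(*MSGQ)   PRCTYPE(*INCLD)
ADDDGOBJE DGDFN(MYDGRP SYSA SYSB) LIB1(QSYS)    OBJ1(ABC) OBJTYPE(*USRPRF) PRCTYPE(*EXCLD)
ADDDGOBJE DGDFN(MYDGRP SYSA SYSB) LIB1(QUSRSYS) OBJ1(ABC) OBJTYPE(*MSGQ)   PRCTYPE(*EXCLD)
```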
Identifying logical and physical files for replication

MIMIX supports multiple ways of replicating *FILE objects with extended attributes of LF, PF-DTA, PF38-DTA, PF-SRC, and PF38-SRC. MIMIX configuration data determines the replication method used for these logical and physical files. The following configurations are possible:

• MIMIX Dynamic Apply - MIMIX Dynamic Apply is strongly recommended. In this configuration, logical files and physical files (source and data) are replicated primarily through the user (database) journal. This configuration is the most efficient way to replicate LF, PF-DTA, PF38-DTA, PF-SRC, and PF38-SRC files. Files are identified by data group object entries and file entries.

• Legacy cooperative processing - Legacy cooperative processing supports only data files (PF-DTA and PF38-DTA). It does not support source physical files or logical files. In legacy cooperative processing, record data and member data operations are replicated through user journal processes, while all other file transactions such as creates, moves, renames, and deletes are replicated through system journal processes. The database processes can use either remote journaling or MIMIX source-send processes, making legacy cooperative processing the recommended choice for physical data files when the remote journaling environment required by MIMIX Dynamic Apply is not possible. Files are identified by data group object entries and file entries.

• User journal (database) only configurations - Environments that do not meet MIMIX Dynamic Apply requirements but which have data group definitions that specify TYPE(*DB) can only replicate data changes to physical files. These configurations may not be able to replicate other operations such as creates, moves, renames, restores, and some copy operations. Members must be closed in order for replication to occur. The entire member is updated with each replicated transaction. Files are identified by data group file entries.

• System journal (object) only configurations - Data group definitions which specify TYPE(*OBJ) are less efficient at processing logical and physical files. Files are identified by data group object entries.

You should be aware of common characteristics of replicating library-based objects, such as when the configured object auditing value is used and how MIMIX interprets data group entries to identify objects eligible for replication. For this information, see "Configured object auditing value for data group entries" on page 89 and "How MIMIX uses object entries to evaluate journal entries for replication" on page 92. Some advanced techniques may require specific configurations. See "Configuring advanced replication techniques" on page 320 for additional information. For detailed procedures, see "Creating data group object entries" on page 242.

Considerations for LF and PF files

Newly created data groups are automatically configured to use MIMIX Dynamic Apply when its requirements and restrictions are met and shipped command defaults are used. Otherwise, data files are replicated using legacy cooperative processing if those requirements are met.

When a data group definition meets the requirements for MIMIX Dynamic Apply, any logical files and physical (source and data) files properly identified for cooperative processing will be processed via MIMIX Dynamic Apply unless a known restriction prevents it. When a data group definition does not meet the requirements for MIMIX Dynamic Apply but still meets legacy cooperative processing requirements, any PF-DTA or PF38-DTA files properly configured for cooperative processing will be replicated using legacy cooperative processing. All other types of files are processed using system journal replication. See "Requirements and limitations of MIMIX Dynamic Apply" on page 101 and "Requirements and limitations of legacy cooperative processing" on page 102 for additional information.

Cooperative journal - The value specified for the Cooperative journal (COOPJRN) parameter in the data group definition is critical to determining how files are cooperatively processed. When creating a new data group, you can explicitly specify a value or you can allow MIMIX to automatically change the default value (*DFT) to either *USRJRN or *SYSJRN based on whether operating system and configuration requirements for MIMIX Dynamic Apply are met. When requirements are met, MIMIX changes the value *DFT to *USRJRN. Otherwise, MIMIX changes *DFT to *SYSJRN.

Note: Data groups created prior to upgrading to version 5 continue to use their existing configuration. The installation process sets the value of COOPJRN to *SYSJRN and this value remains in effect until you take action as described in "Converting to MIMIX Dynamic Apply" on page 133.

Logical file considerations - Consider the following for logical files:
• Logical files are replicated through the user journal when MIMIX Dynamic Apply requirements are met. If a data group is configured for only system replication (TYPE is *OBJ), they are replicated through the system journal.
• It is strongly recommended that logical files reside in the same data group as all of their associated physical files.

Physical file considerations - Consider the following for physical files:
• Physical files (source and data) are replicated through the user journal when MIMIX Dynamic Apply requirements are met. If a data group definition specifies TYPE(*DB) and the configuration meets other MIMIX Dynamic Apply requirements, logical and physical files are processed primarily from the user journal.
• When a data group definition meets the requirements for MIMIX Dynamic Apply, source files need to be identified by both data group object entries and data group file entries.
• If a data group is configured for only user journal replication (TYPE is *DB) and does not meet other configuration requirements for MIMIX Dynamic Apply, any source files should be identified by only data group file entries.
• If a data group is configured for only system replication (TYPE is *OBJ), source files should be identified by only data group object entries, and source files are replicated through the system journal. Any data group object entries configured for cooperative processing will be replicated through the system journal and should not have any corresponding data group file entries.

Commitment control - This database technique allows multiple updates to one or more files to be considered a single transaction. When used, commitment control maintains database integrity by not exposing a part of a database transaction until the whole transaction completes. This ensures that there are no partial updates when the process is interrupted prior to the completion of the transaction. This technique is also useful in the event that a partially updated transaction must be removed, or rolled back, from the files or when updates identified as erroneous need to be removed.

When commitment control is used on a source system in a MIMIX environment, MIMIX fully simulates commitment control on the target system. MIMIX maintains the integrity of the database on the target system by preventing partial transactions from being applied until the whole transaction completes. In the event of an incomplete (or uncommitted) commitment cycle, MIMIX will not have applied incomplete transactions on the target system. If the source system becomes unavailable, the integrity of the database is maintained.

If your application dynamically creates database files that are subsequently used in a commitment control environment, use MIMIX Dynamic Apply for replication. Without MIMIX Dynamic Apply, replication of the create operation may fail if a commit cycle is open when MIMIX tries to save the file. The save operation will be delayed and may fail if the file being saved has uncommitted transactions.

Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same database apply session. For more information about load balancing apply sessions, see "Database apply session balancing" on page 81.

Files with LOBs - Large objects (LOBs) in files that are configured for either MIMIX Dynamic Apply or legacy cooperative processing are automatically replicated. LOBs can greatly increase the amount of data being replicated. Since the volume of data to be replicated can be very large, you may see some degradation in your replication activity. The amount of degradation you see is proportionate to the amount of journal entries with LOBs that are applied per hour. This is also true during switch processing if you are using remote journaling and have unconfirmed entries with LOB data.

IBM support for minimized journal entry data can be extremely helpful when database records contain static, very large objects. This can significantly improve performance, throughput, and storage requirements. As a result, you should consider using the minimized journal entry data function along with LOB replication. If minimized journal entry data is enabled, journal entries for database files containing unchanged LOB data may be complete and therefore processed like any other complete journal entry. If minimized journal entry data is used with files containing LOBs, keyed replication is not supported. For more information, see "Minimized journal entry data" on page 307.

User exit programs may be affected when journaled LOB data is added to an existing data group. Non-minimized LOB data produces incomplete entries. For incomplete journal entries, two or more entries with duplicate journal sequence numbers and journal codes and types will be provided to the user exit program when the data for the incomplete entry is retrieved and segmented. Programs need to correctly handle these duplicate entries representing the single, original journal entry.

You should also be aware of the following restrictions:
• Copy Active File (CPYACTF) and Reorganize Active File (RGZACTF) do not work against database files with LOB fields.
• Journaled changes cannot be removed for files with LOBs that are replicated by a data group that does not use remote journaling (RJLNK(*NO)). In this scenario, the F-RC entry generated by the IBM command Remove Journaled Changes (RMVJRNCHG) cannot be applied on the target system.
• There is no collision detection for LOB data. Most collision detection classes compare the journal entries with the content of the record on the target system. Although you can compare the actual content of the record, you cannot compare the content of the LOBs.

Configuration requirements for LF and PF files

MIMIX Dynamic Apply and legacy cooperative processing have unique requirements for data group definitions as well as many common requirements for data group object entries and file entries. In both configurations, you must have:
• A data group definition which specifies the required values, as indicated in Table 12.
• One or more data group object entries that specify the required values. These entries identify the items within the name space for replication. You may need to create additional entries to achieve the desired results, including entries which specify a Process type of *EXCLD.
• Data group file entries for the items identified by data group object entries. Processing cannot occur without these corresponding data group file entries.
• Identified existing objects that are journaled to the journal defined for the data group.
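As a hedged sketch of the required pairing, a cooperatively processed file might be identified by both an object entry and a file entry. The COOPDB and COOPTYPE keywords match the parameter names used in this chapter; the ADDDGOBJE and ADDDGFE command names, the PRCTYPE and FILE1 keywords, and all object names are assumptions that should be verified against your MIMIX level.

```
ADDDGOBJE DGDFN(MYDGRP SYSA SYSB) LIB1(APPLIB) OBJ1(ORDERS) +
          OBJTYPE(*FILE) COOPDB(*YES) COOPTYPE(*FILE) PRCTYPE(*INCLD)
ADDDGFE   DGDFN(MYDGRP SYSA SYSB) FILE1(APPLIB/ORDERS)
```

In practice the file entries are usually loaded from the object entries during initial configuration rather than added one at a time.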
Table 12. Key configuration values required for MIMIX Dynamic Apply and legacy cooperative processing

Critical Parameters                        MIMIX Dynamic Apply        Legacy Cooperative Processing
                                           Required Values            Required Values
Data Group Definition
  Data group type (TYPE)                   *ALL or *DB                *ALL
  Use remote journal link (RJLNK)          *YES                       any value
  Cooperative journal (COOPJRN)            *DFT or *USRJRN            *DFT or *SYSJRN
  File and tracking ent. opts (FEOPT),
    Replication type                       *POSITION                  any value
Data Group Object Entries
  Object type (OBJTYPE)                    *ALL or *FILE              *ALL or *FILE
  Attribute (OBJATR)                       *ALL or one of the         *ALL, PF-DTA, or PF38-DTA
                                           following: LF, LF38,
                                           PF-DTA, PF38-DTA,
                                           PF-SRC, PF38-SRC
  Cooperate with database (COOPDB)         *YES                       *YES
  Cooperating object types (COOPTYPE)      *FILE                      *FILE
  File and tracking ent. opts (FEOPT),
    Replication type                       *POSITION                  any value

Notes: See "Requirements and limitations of MIMIX Dynamic Apply" on page 101 and "Requirements and limitations of legacy cooperative processing" on page 102. Corresponding data group file entries are required for both configurations. For how *DFT is resolved, see "Cooperative journal" under "Considerations for LF and PF files."

Corresponding data group file entries - Both MIMIX Dynamic Apply and legacy cooperative processing require that existing files identified by a data group object entry which specifies *YES for the Cooperate with DB (COOPDB) parameter also be identified by data group file entries. When a file is identified by both a data group object entry and a data group file entry, the values specified in the data group file entry take precedence. If the data group object entry and file entry specify different values for the File and tracking ent. opts (FEOPT) parameter, the data group file entry values are used. When a file is identified by both entry types, the following are also required:
• The object entry must enable the cooperative processing of files by specifying COOPDB(*YES) and COOPTYPE(*FILE).
• If name mapping is used between systems, the data group object entry and file entry must have the same name mapping defined. For example, MYLIB/MYOBJ mapped to MYLIB/OTHEROBJ is not supported in MIMIX Dynamic Apply configurations.
• Files defined by data group file entries must have journaling started and must be synchronized. If journaling is not started, MIMIX cannot replicate activity for the file.

The #DGFE audit can be used to determine whether corresponding data group file entries exist for the files identified by data group object entries. Typically, data group object entries are created during initial configuration and are then used as the source for loading the data group file entries.

Requirements and limitations of MIMIX Dynamic Apply

MIMIX Dynamic Apply requires that user journal replication be configured to use remote journaling. Specific data group definition and data group entry requirements are listed in Table 12. MIMIX Dynamic Apply configurations have the following limitations.

TYPE(*DB) data groups - MIMIX Dynamic Apply configurations that specify TYPE(*DB) in the data group definition will not be able to replicate the following actions:
• Files created using CPYF CRTFILE(*YES) on OS V5R3 into a library configured for replication
• Files restored into a source library configured for replication
• Files moved or renamed from a non-replicated library into a replicated library
• Files created which are not otherwise journaled upon creation into a library configured for replication
Files created by these actions can be added to the MIMIX configuration by running the #DGFE audit. The audit recovery will synchronize the file as part of adding the file entry to the configuration. In data groups that specify TYPE(*ALL), the above actions are fully supported.

Name mapping - MIMIX Dynamic Apply configurations support name mapping at the library level only. Entries with object name mapping are not supported. If you require object name mapping, it is supported in legacy cooperative processing configurations.

Data group file entries for members - Data group file entries (DGFE) for specific member names are not supported unless they are created by MIMIX. MIMIX may create these for error hold processing.

It is recommended that files within a single library be replicated using the same user journal.

File entry options - If a file is moved or renamed and both names are defined by a data group file entry, the file entry options must be the same in both data group file entries.

Referential constraints - Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same apply session. If a particular preferred apply session has been specified in file entry options (FEOPT), MIMIX may ignore the specification in order to satisfy this restriction. If you are using referential constraints with *CASCADE or *SETNULL actions, you must specify *YES for the Journal on target (JRNTGT) parameter in the data group definition.

Positional replication only - Keyed replication is not supported by MIMIX Dynamic Apply. Data group definitions, data group object entries, and data group file entries must specify *POSITION for the Replication type element of the file and tracking entry options (FEOPT) parameter. The value *KEYED cannot be used. If this is not possible, contact CustomerCare.

Requirements and limitations of legacy cooperative processing

Legacy cooperative processing requires that data groups be configured for both database (user journal) and object (system journal) replication. While remote journaling is recommended, MIMIX source-send processing for database replication is also supported. Specific data group definition and data group entry requirements are listed in Table 12.

Supported extended attributes - Legacy cooperative processing supports only data files (PF-DTA and PF38-DTA). When a *FILE object is configured for legacy cooperative processing, all member and data changes are logged and replicated through user journal replication processes; only file and member attribute changes identified by T-ZC journal entries with a subclass of 7=Change are logged and replicated through system journal replication processes.

Legacy cooperative processing configurations have the following limitations:
• Referential constraints - Physical files with referential constraints require a field in another physical file to be valid. All physical files in a referential constraint structure must be in the same database apply session.

Identifying data areas and data queues for replication

While user journal replication, also called advanced journaling, has significant advantages, you must decide whether it is appropriate for your environment. See "Planning for journaled IFS objects, data areas, and data queues" on page 79. Data areas can also be replicated by the data area poller process associated with the user journal; however, this type of replication is the least preferred and requires data group data area entries. See "Creating data group data area entries" on page 261.
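As a hedged sketch, a data group object entry that selects data areas for user journal (advanced journaling) replication might look like the following. The COOPDB, COOPTYPE, OBJTYPE, OBJ1, and LIB1 keywords mirror the parameter names used in this chapter; the ADDDGOBJE command name, the PRCTYPE keyword, and the data group, system, and library names are assumptions to verify for your environment.

```
ADDDGOBJE DGDFN(MYDGRP SYSA SYSB) LIB1(APPLIB) OBJ1(*ALL) +
          OBJTYPE(*DTAARA) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*DTAARA)
```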
When specifying objects in data group object entries. Object entries can be configured so that these object types can be replicated from journal entries recorded in the system journal (default) or in a user journal (optional). The configured object auditing value affects how MIMIX handles changes to attributes of library-based objects. consider the following: • You must have at least one data group object entry which specifies a a Process type of *INCLD. For objects configured for user journal replication.data areas and data queues For any data group object entries you create for data areas or data queues. data areas.Identifying data areas and data queues for replication Identifying data areas and data queues for replication MIMIX uses data group object entries to determine whether to process transactions for data area (*DTAARA) and data queue (*DTAQ) object types. see “Configuring data group entries” on page 241. you must decide whether it is appropriate for your environment. specify only the objects that need to be replicated. This may include entries which specify a Process type of *EXCLD. Apply session load balancing . Serialized transactions . Furthermore. is used for all data area and data queue objects are replicated from a user journal. Table 13. changes to data area and data queue content. Additionally. are recognized and supported through user journal replication. • • • Restrictions .identified by object tracking entries.If you use user exit programs that process user journal entries. User exit programs . session A. Critical configuration parameters for replicating *DTAARA and *DTAQ objects from a user journal Required Values Configuration Notes Critical Parameters Data Group Definition Data group type (TYPE) Data Group Object Entry Cooperate with database (COOPDB) Cooperating object types (COOPTYPE) *ALL *YES *DTAARA *DTAQ The appropriate object types must be specified to enable advanced journaling. 
and cannot perform data integrity checks on the target system to ensure that data being replaced on the target system is an exact match to the data replaced on the source system.user journal replication of data areas and data queues For operating systems V5R4 and above. You may need to adjust the configuration accordingly. be aware of the following restrictions: • MIMIX does not support before-images for data updates to data areas. Other replication activity can use this apply session. Otherwise. you may need to adjust the configuration for the replicated files. data areas.When converting an existing data group to use or add advanced journaling. you may need to modify your programs. When considering replicating data areas and data queues using MIMIX user journal replication processes. as well as changes to structure (such as moves and renames) and number (such as creates and deletes). and may cause it to become overloaded. see “Planning for journaled IFS objects.If you need to serialize transactions for database files and data area or data queue objects replicated from a user journal.One database apply session. and data queues” on page 79 for additional details if any of the following apply: • Converting existing configurations . you must consider whether journals should be shared and whether data area or data queue objects should be replicated in a data group that also replicates database files. MIMIX does not provide a mechanism to prevent 104 . system journal replication results. If this occurs. use standard system journal replication methods. • The apply of data area and data queue objects is restricted to a single database apply job (DBAPYA). The ability to replicate Distributed Data Management (DDM) data areas and data queues is not supported. you should run MIMIX AutoGuard on a regular basis. this job may fall behind in the processing of journal entries. 
• MIMIX does not provide a mechanism to prevent users or applications from accidentally updating replicated data areas on the target system.
• Pre-existing data areas and data queues to be selected for replication must have journaling started on both the source and target systems before the data group is started.
• The subset of E and Q journal code entry types supported for user journal replication are listed in “Journal codes and entry types for journaled data areas and data queues” on page 590.

Identifying IFS objects for replication

MIMIX uses data group IFS entries to determine whether to process transactions for objects in the integrated file system (IFS). IFS entries can be configured so that the identified objects can be replicated from journal entries recorded in the system journal (default) or in a user journal (optional). User journal replication, also called advanced journaling, is well suited to the dynamic environments of IFS objects. Objects configured for user journal replication may have create, move, rename, delete, and restore operations replicated through the user journal; differences in implementation details are described in “Processing variations for common operations” on page 117.

Supported IFS file systems and object types

IFS objects configured to be replicated from a user journal must be in the Root (‘/’) or QOpenSys file systems. The following object types are supported:
• Directories (*DIR)
• Stream Files (*STMF)
• Symbolic Links (*SYMLNK)

Table 14 identifies the IFS file systems that are not supported by MIMIX and cannot be specified for either the System 1 object prompt or the System 2 object prompt in the Add Data Group IFS Entry (ADDDGIFSE) command.
Table 14. IFS file systems that are not supported by MIMIX

  /QDLS           /QFPNWSSTG   /QNetWare   /QOPT       /QSR
  /QFileSvr.400   /QLANSrv     /QNTC       /QSYS.LIB

Journaling is not supported for files in Network Work Storage Spaces (NWSS), which are used as virtual disks by IXS and IXA technology.

One of the most important decisions in planning for MIMIX is determining which IFS objects you need to replicate and what replication path is used. Most likely, you want to limit the IFS objects you replicate to mission-critical objects. While user journal replication has significant advantages, you must decide whether it is appropriate for your environment. For detailed procedures, see “Creating data group IFS entries” on page 255. Refer to the IBM book OS/400 Integrated File System Introduction for more information about IFS.

Considerations when identifying IFS objects

The following considerations for IFS objects apply regardless of whether replication occurs through the system journal or user journal.

Upper and lower case IFS object names - When you create data group IFS entries, be aware of the following information about character case sensitivity for specifying IFS object names:
• The root file system on the System i is generally not case sensitive. The QOpenSys file system on the System i is generally case sensitive. IFS entries are processed using the unicode character set.
• MIMIX preserves the character case of IFS object names during replication. For example, the creation of /AbCd on the source system will be replicated as /AbCd on the target system. Replication will not alter the character case of objects that already exist on the target system (unless the object is deleted and recreated). If /ABCD exists as such on the target system, changes to /AbCd will be replicated to /ABCD, but the object name will not be changed to /AbCd on the target system.
• MIMIX may present path names as all upper case or all lower case. For example, the WRKDGACTE display shows all lower case, while the WRKDGIFSE display shows all upper case.

Long IFS path names - MIMIX currently replicates IFS path names of 512 characters.
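The Table 14 restriction can be sketched as a small helper that rejects paths in unsupported file systems. This is a hypothetical illustration, not a MIMIX routine; it only encodes the file-system prefixes the text lists.

```python
# Hypothetical sketch: reject IFS paths in file systems that Table 14 lists
# as unsupported for the ADDDGIFSE System 1/System 2 object prompts.

UNSUPPORTED_FILE_SYSTEMS = [
    "/QDLS", "/QFileSvr.400", "/QFPNWSSTG", "/QLANSrv",
    "/QNetWare", "/QNTC", "/QOPT", "/QSYS.LIB", "/QSR",
]

def is_supported_ifs_path(path):
    """Only paths in the Root ('/') or QOpenSys file systems are supported."""
    p = path.upper()
    for fs in UNSUPPORTED_FILE_SYSTEMS:
        fs = fs.upper()
        if p == fs or p.startswith(fs + "/"):
            return False
    return True

print(is_supported_ifs_path("/home/payroll/totals.dat"))   # True
print(is_supported_ifs_path("/QDLS/FINANCE1/LEDGER.JUL"))  # False
```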
However, any MIMIX command that takes an IFS path name as input may be susceptible to a 506 character limit. This character limit may be reduced even further if the IFS path name contains embedded apostrophes ('): the supported IFS path name length is reduced by four characters for every apostrophe the path name contains. For information about IFS path name naming conventions, refer to the IBM book Integrated File System Introduction V5R4.

When character case is not a concern (root file system): In the root file system, /AbCd, /abcd, and /ABCD are equivalent names; you can create /AbCd or /ABCD, but not both. Character case is preserved when creating objects, but otherwise character case is ignored. You can refer to the object by any mix of character case. For example, subsetting WRKDGACTE by /AbCd and /ABCD will produce the same result.

When character case does matter (QOpenSys file system): Except for “QOpenSys” in a path name, all characters in a path name are case sensitive. For example, you can create both /QOpenSys/AbCd and /QOpenSys/ABCD. You must specify the correct character case when referring to an object.

MIMIX processing order for data group IFS entries: Data group IFS entries are processed in order from most generic to most specific. The first entry (more generic) found that matches the object is used until a more specific match is found.
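The path-length rule described above (a 506-character command limit, reduced by four characters per embedded apostrophe) can be sketched as a small calculation. This is a hypothetical illustration of the stated rule, not a MIMIX utility.

```python
# Hypothetical sketch of the command-input length rule: MIMIX commands
# accept IFS path names up to 506 characters, and each embedded apostrophe
# reduces the supported length by four characters.

def effective_path_limit(path):
    """Maximum supported length for this path as command input."""
    return 506 - 4 * path.count("'")

def fits_command_input(path):
    return len(path) <= effective_path_limit(path)

print(effective_path_limit("/docs/yearend"))     # 506
print(effective_path_limit("/docs/o'brien/q1"))  # 502
```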
For QOpenSys paths, names must be entered in the appropriate character case, and MIMIX presents path names in the appropriate case. For example, subsetting the WRKDGACTE display by /QOpenSys/ABCD will not find /QOpenSys/AbCd; the WRKDGACTE display and the WRKDGIFSE display would show /QOpenSys/AbCd, if that is the actual object path.

Configured object auditing value for IFS objects

When you create data group IFS entries, you can specify an object auditing value within the configuration. The configured object auditing value affects how MIMIX handles changes to attributes of IFS objects. It is particularly important for, but not limited to, objects configured for system journal replication. As such, the configured value can affect MIMIX performance. For details, see “Configured object auditing value for data group entries” on page 89.

Configuration requirements - IFS objects

For IFS objects configured for user journal replication, specify only the IFS objects that need to be replicated. You may need to create additional entries to achieve the desired results.

Additional requirements for user journal replication - The following additional requirements must be met before IFS objects identified by data group IFS entries can be replicated with user journal processes:
• IFS tracking entries must exist for the objects identified by properly configured IFS entries. Typically, these are created automatically when the data group is started.
• Journaling must be started on both the source and target systems for the objects identified by IFS tracking entries.
• The data group definition and data group IFS entries must specify the values identified in Table 15 for critical parameters.

Table 15. Critical configuration parameters for replicating IFS objects from a user journal

  Configuration           Critical Parameters      Required Values   Configuration Notes
  Data Group Definition   Data group type (TYPE)   *ALL
  Data Group IFS Entry    Cooperate with database (COOPDB)   *YES   The default, *NO, results in system journal replication.

For any data group IFS entry you create, consider the following:
• You must have at least one data group IFS entry which specifies a Process type of *INCLD. The System 1 object (OBJ1) parameter selects all IFS objects within the path specified.
• Specify only the IFS objects that need to be replicated. This may include entries which specify a Process type of *EXCLD.
• You can specify an object auditing value within the configuration. For details, see “Configured object auditing value for data group entries” on page 89.
When considering replicating IFS objects using MIMIX user journal replication processes, see “Planning for journaled IFS objects, data areas, and data queues” on page 79 for additional details if any of the following apply:
• Serialized transactions - If you need to serialize transactions for database files and IFS objects replicated from a user journal, you must consider whether journals should be shared and whether IFS objects should be replicated in a data group that also replicates database files.
• Apply session load balancing - One database apply session, session A, is used for all IFS objects that are replicated from a user journal. Other replication activity can use this apply session. If a data group has too much replication activity, this job may fall behind in the processing of journal entries, and may cause it to become overloaded. If this occurs, you should load-level the apply sessions by moving some or all of the database files to another database apply job.
• User exit programs - If you use user exit programs that process user journal entries, you may need to modify your programs. You may need to adjust the configuration accordingly.
• Converting existing configurations - When converting an existing data group to use or add advanced journaling, you may need to adjust the configuration for the replicated files.

Restrictions - user journal replication of IFS objects

Be aware of the following restrictions:
• The operating system does not support before-images for data updates to IFS objects. As a result, MIMIX cannot perform data integrity checks on the target system to ensure that data being replaced on the target system is an exact match to the data replaced on the source system. For journaled IFS objects, MIMIX will check the integrity of the IFS data through the use of regularly scheduled audits, specifically the #IFSATR audit.
• The apply of IFS objects is restricted to a single database apply job (DBAPYA).
• A physical object, such as an IFS object, is identified by a hard link. Typically, an unlimited number of hard links can be created as identifiers for one object. MIMIX does not support the replication of additional hard links because doing so causes the same FID to be used for multiple names for the same IFS object.
• Pre-existing IFS objects to be selected for replication must have journaling started on both the source and target systems before the data group is started.
• The ability to “lock on apply” IFS objects in order to prevent unauthorized updates from occurring on the target system is not supported when advanced journaling is configured.
• The ability to use the Remove Journaled Changes (RMVJRNCHG) command for removing journaled changes for IFS tracking entries is not supported.
• It is recommended that option 14 (Remove related) on the Work with Data Group Activity (WRKDGACT) display not be used for failed activity entries representing actions against cooperatively processed IFS objects. Because this option does not remove the associated tracking entries, orphan tracking entries can accumulate on the system.
• The subset of B journal code entry types supported for user journal replication are listed in “Journal codes and entry types for journaled IFS objects” on page 590.

Identifying DLOs for replication

MIMIX uses data group DLO entries to determine whether to process system journal transactions for document library objects (DLOs). Each DLO entry for a data group includes a folder path, a document name, an owner, an object auditing level, and an include or exclude indicator. In a data group DLO entry, the folder path and document can be generic or *ALL; in addition to specific names, MIMIX supports generic names for DLOs. For detailed procedures, see “Creating data group DLO entries” on page 259.

When you create data group DLO entries, you can specify an object auditing value within the configuration. The configured object auditing value affects how MIMIX handles changes to attributes of DLOs. For detailed information, see “Configured object auditing value for data group entries” on page 89.
How MIMIX uses DLO entries to evaluate journal entries for replication

How items are specified within a DLO entry determines whether MIMIX selects or omits them from processing. This information can help you understand what is included or omitted. When determining whether to process a journal entry for a DLO, MIMIX looks for a match between the DLO information in the journal entry and one of the data group DLO entries. The data group DLO entries are checked from the most specific to the least specific; the most significant match found (if any) is checked to determine whether to process the entry. The folder path is the most significant search element, followed by the document name, and then the owner.

For a folder path with multiple elements (for example, A/B/C/D), the exact checks and generic checks against data group DLO entries are performed on the path. If no match is found, the lowest path element is removed and the process is repeated. For example, A/B/C/D is reduced to A/B/C and is rechecked. This process continues until a match is found or until all elements of the path have been removed. If there is still no match, then checks for folder path *ALL are performed.

An exact or generic folder path name in a data group DLO entry applies to folder paths that match the entry as well as to any unnamed child folders of that path which are not covered by a more explicit entry. For example, a data group DLO entry with a folder path of “ACCOUNT” would also apply to a transaction for a document in folder path ACCOUNT/JANUARY. If a second data group DLO entry with a folder path of “ACCOUNT/J*” were added, it would take precedence because it is more specific.

Sequence and priority order for documents

Table 16 illustrates the sequence in which MIMIX checks DLO entries for a match.

Table 16. Matching order for document names

  Folder Path   Document Name   Owner   Search Order
  Exact         Exact           Exact   1
  Exact         Exact           *ALL    2
  Exact         Generic*        Exact   3
  Exact         Generic*        *ALL    4
  Exact         *ALL            Exact   5
  Exact         *ALL            *ALL    6
  Generic*      Exact           Exact   7
  Generic*      Exact           *ALL    8
  Generic*      Generic*        Exact   9
  Generic*      Generic*        *ALL    10
  Generic*      *ALL            Exact   11
  Generic*      *ALL            *ALL    12
  *ALL          Exact           Exact   13
  *ALL          Exact           *ALL    14
  *ALL          Generic*        Exact   15
  *ALL          Generic*        *ALL    16
  *ALL          *ALL            Exact   17
  *ALL          *ALL            *ALL    18

Document example - Table 17 illustrates some sample data group DLO entries.

Table 17. Sample data group DLO entries, arranged in order from most to least specific

  Entry   Folder Path   Document   Owner    Process Type
  1       FINANCE1      PAYROLL    *ALL     *EXCLD
  2       FINANCE1      LEDGER*    *ALL     *EXCLD
  3       FINANCE1      *ALL       SMITHA   *EXCLD
  4       FINANCE1      *ALL       *ALL     *INCLD
  5       FINANCE2/Q1   *ALL       *ALL     *INCLD
  6       FIN*          *ALL       *ALL     *EXCLD

A transaction for document ACCOUNTS in FINANCE1 owned by JONESB would be replicated because it matches entry 4. If SMITHA owned ACCOUNTS in FINANCE1, the transaction would be blocked by entry 3. Documents LEDGER.JUL and LEDGER.AUG in FINANCE1 would be blocked by entry 2, and document PAYROLL in FINANCE1 would be blocked by entry 1. Transactions for documents in FINANCE2/Q1, or in a child folder of that path such as FINANCE2/Q1/FEB, would be replicated because of entry 5. A transaction for any document in FINANCE2 would be blocked by entry 6. Likewise, a transaction for any document in a folder named FINANCE would be blocked from replication because it matches entry 6.
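The search just described — exact and generic checks at each folder-path level, dropping the lowest path element when nothing matches — can be sketched in Python. This is a simplified, hypothetical illustration of the matching rules (it assumes the entry list is already ordered most specific first, as Table 17 is), not MIMIX code.

```python
# Hypothetical sketch of DLO entry matching: check the folder path exactly
# and generically, reduce A/B/C/D to A/B/C when nothing matches, and fall
# back to folder path *ALL last.
import fnmatch

def find_dlo_match(entries, folder_path, document, owner):
    """entries: (folder, document, owner, process_type), most specific first."""
    def matches(pattern, value):
        return pattern == "*ALL" or fnmatch.fnmatchcase(value, pattern)

    path = folder_path
    while path:
        for folder, doc, own, ptype in entries:
            if folder != "*ALL" and fnmatch.fnmatchcase(path, folder) \
                    and matches(doc, document) and matches(own, owner):
                return ptype
        path = "/".join(path.split("/")[:-1])   # A/B/C/D -> A/B/C
    for folder, doc, own, ptype in entries:     # finally, folder path *ALL
        if folder == "*ALL" and matches(doc, document) and matches(own, owner):
            return ptype
    return None

# The entries from Table 17, most to least specific:
entries = [
    ("FINANCE1", "PAYROLL", "*ALL", "*EXCLD"),
    ("FINANCE1", "LEDGER*", "*ALL", "*EXCLD"),
    ("FINANCE1", "*ALL", "SMITHA", "*EXCLD"),
    ("FINANCE1", "*ALL", "*ALL", "*INCLD"),
    ("FINANCE2/Q1", "*ALL", "*ALL", "*INCLD"),
    ("FIN*", "*ALL", "*ALL", "*EXCLD"),
]
print(find_dlo_match(entries, "FINANCE1", "ACCOUNTS", "JONESB"))  # *INCLD
print(find_dlo_match(entries, "FINANCE1", "PAYROLL", "JONESB"))   # *EXCLD
```

The two calls reproduce the document example above: ACCOUNTS owned by JONESB falls through entries 1–3 and matches include entry 4, while PAYROLL is blocked by exclude entry 1.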
Sequence and priority order for folders

Folders are treated somewhat differently than documents. Folders are replicated based on whether there are any data group DLO entries with a process type of *INCLD that would require the folder to exist on the target system. If a folder needs to exist to satisfy the folder path of an include entry, the folder will be replicated even if a different exclude entry prevents replication of the contents of the folder.

There is one exception to the requirement of replicating folders to satisfy the folder path for an include entry. A folder will not be replicated when the only include entry that would cause its replication specifies *ALL for its folder path and the folder matches an exclude entry with an exact or a generic folder path name. Table 17 and Table 18 illustrate the differences in matching folders to be replicated.

In Table 17, a transaction for folder FINANCE1 would be replicated because of entry 4. Likewise, a transaction for folder FINANCE2 would be replicated because of entry 5; only FINANCE2 itself must exist to satisfy entry 5. Note that any transactions for documents in FINANCE2, or in any child folders other than those in the path that includes Q1, would be blocked by entry 6. A transaction for a folder named FINANCE would be blocked from replication because it matches entry 6. This would also affect all folders within FINANCE.

Table 18. Sample data group DLO entries, folder example

  Entry   Folder Path   Document   Owner    Process Type
  1       ACCOUNT2      LEDGER*    *ALL     *EXCLD
  2       ACCOUNT       *ALL       *ALL     *EXCLD
  3       *ALL          ABC*       *ALL     *INCLD
  4       *ALL          *ALL       JONESB   *INCLD
  5       *ALL          *ALL       *ALL     *INCLD

In Table 18, a transaction for folder ACCOUNT would be blocked from replication because ACCOUNT matches an exclude entry with an exact folder path (entry 2), and the only include entries that would cause it to be replicated specify folder path *ALL. This is because of the exception described above. The exception also affects all child folders in the ACCOUNT folder path. Note that the exception holds true even if ACCOUNT is owned by user profile JONESB (entry 4) because the more specific folder name match takes precedence.

A transaction for folder ACCOUNT2 would be replicated even though it is an exact path name match for exclude entry 1. The exception does not apply because entry 1 does not specify document *ALL. Entry 5 requires that ACCOUNT2 exist on the target system to satisfy the folder path requirements for document names other than LEDGER* and for child folders of ACCOUNT2.
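The folder rule and its exception can be condensed into a small decision sketch. This is a hypothetical, simplified illustration (it treats an exclude as blocking the folder itself only when that exclude specifies document *ALL, as the ACCOUNT2 example shows), not a MIMIX routine.

```python
# Hypothetical sketch: decide whether a folder is replicated. A folder is
# replicated when an include entry needs it to exist on the target system,
# EXCEPT when the only applicable include entries specify folder path *ALL
# and a named (exact or generic) exclude entry covers the whole folder.
import fnmatch

def folder_is_replicated(entries, folder):
    """entries: (folder_path, document, process_type) data group DLO entries."""
    includes = [f for f, d, p in entries if p == "*INCLD"
                and (f == "*ALL" or fnmatch.fnmatchcase(folder, f))]
    if not includes:
        return False
    # An exclude blocks the folder itself only when it names the folder
    # (exact or generic path) and applies to all documents in it.
    excluded_by_name = any(p == "*EXCLD" and f != "*ALL" and d == "*ALL"
                           and fnmatch.fnmatchcase(folder, f)
                           for f, d, p in entries)
    if excluded_by_name and all(f == "*ALL" for f in includes):
        return False  # the exception described above
    return True

# Folder paths, documents, and process types from Table 18:
entries = [("ACCOUNT2", "LEDGER*", "*EXCLD"), ("ACCOUNT", "*ALL", "*EXCLD"),
           ("*ALL", "ABC*", "*INCLD"), ("*ALL", "*ALL", "*INCLD")]
print(folder_is_replicated(entries, "ACCOUNT"))   # False (exception applies)
print(folder_is_replicated(entries, "ACCOUNT2"))  # True (entry 1 is not document *ALL)
```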
Processing of newly created files and objects

Your production environment is dynamic. New objects continue to be created after MIMIX is configured and running. When properly configured, new files created on the source system that are eligible for replication will be re-created on the target system by MIMIX. When a configuration enables journaling to be implicitly started on new objects, a newly created object is already journaled. When the journaled object falls within the group of objects identified for replication by a data group, newly created objects that are eligible for replication are automatically replicated: MIMIX automatically recognizes entries in the user journal that identify new create operations and replicates any that are eligible. Configurations that replicate files, data areas, data queues, or IFS objects from user journal entries require journaling to be started on the objects before replication can occur. For more information about requirements and restrictions for implicit starting of journaling, as well as examples of how MIMIX determines whether to replicate a new object, see “What objects need to be journaled” on page 294.

Optionally, MIMIX can also notify you of newly created objects not eligible for replication so that you can choose whether to add them to the configuration. The MMNFYNEWE monitor is a shipped journal monitor that watches the security audit journal (QAUDJRN) for newly created libraries, folders, or directories that are not already included or excluded for replication by a data group, and sends warning notifications when its conditions are met. This monitor is shipped disabled. User action is required to enable this monitor on the source system within your MIMIX environment. Once enabled, the monitor will automatically start with the master monitor. For more information about the conditions that are checked, see topic “Notifications for newly created objects” in the Using MIMIX book.

Newly created files

When newly created *FILE objects are implicitly journaled and are eligible for replication, the replication processes used depend on how the data group definition is configured and how the data group entry with the most specific match to the file is configured. These variations are described in the following subtopics.

New file processing - MIMIX Dynamic Apply

When a data group definition meets configuration requirements for MIMIX Dynamic Apply and data group object and file entries are properly configured, MIMIX replicates the create operation.
The following briefly describes the events that occur for newly created files on the source system which are configured for MIMIX Dynamic Apply:
• User journal replication processes dynamically add a file entry for the file when a file create is seen in the user journal. The file entry is added with a status of *ACTIVE.
• System journal replication processes ignore the creation entry, knowing that user journal replication processes will get a create entry as well.
• User journal replication processes create the file on the target system. Replication proceeds normally after the file has been created.

For MIMIX Dynamic Apply configurations, MIMIX always attempts to place files that are related due to referential constraints into the same apply session. This eliminates the possibility of constraint violations that would otherwise occur if apply sessions processed the files independently. However, there are some situations where constraints are added dynamically between two files already assigned to different apply sessions. In this case, the constraint may need to be disabled to avoid the constraint violations. In the case of cascading constraints, where a modification to one file cascades operations to related files, MIMIX will always attempt to apply the cascading entries, whether the constraint is enabled or disabled, to ensure that the modification is done.

New file processing - legacy cooperative processing

When a data group definition meets configuration requirements for legacy cooperative processing and data group object and file entries are properly configured, files created on the source system will be saved and restored to the target system by system journal replication processes. The following briefly describes the events that occur when files are created that have been defined for legacy cooperative processing:
• System journal replication processes communicate with user journal replication processes to add a data group file entry for the file (ADDDGFE command). The file entry is added with the status of *HLD.
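The legacy cooperative processing steps walk the data group file entry through a fixed sequence of statuses, beginning with the *HLD status just described. The following is a minimal, hypothetical sketch of that progression; the status names come from the text, but the function is illustrative only.

```python
# Hypothetical sketch: the file entry status sequence during legacy
# cooperative processing of a newly created file.
# *HLD     - entry added held (ADDDGFE)
# *RLSWAIT - release wait requested after the save/restore completes
# *ACTIVE  - database apply reaches the save point; file is made active

LEGACY_CREATE_STATUSES = ["*HLD", "*RLSWAIT", "*ACTIVE"]

def next_status(current):
    """Advance a file entry to its next status in the create sequence."""
    i = LEGACY_CREATE_STATUSES.index(current)
    return LEGACY_CREATE_STATUSES[min(i + 1, len(LEGACY_CREATE_STATUSES) - 1)]

status = "*HLD"
status = next_status(status)   # release wait request issued
status = next_status(status)   # apply reaches the save point
print(status)  # *ACTIVE
```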
• Journaling is started on the file if it is not already active.
• System journal replication processes save the created file, restore it on the target system, and then communicate with user journal replication processes to issue a release wait request against the file. The status of the file entry changes to *RLSWAIT.
• A user journal transaction is created on the source system and is transferred to the target system to dynamically add the file to active user journal processes. The database apply process waits for the save point in the journal, and then makes the file active. The status of the file entry changes to *ACTIVE.
• All subsequent file changes, including moves or renames, member operations (adds, changes, and removes), member data updates, authority changes, and file deletes, are replicated through the user journal.

Newly created IFS objects, data areas, and data queues

When journaling is implicitly started for IFS objects, data areas, and data queues, newly created objects that are eligible for replication are automatically replicated. Configuration values specified in the data group IFS entry or object entry that most specifically matches the new object determine what replication processes are used.

Note: Non-journaled objects are replicated through the system journal.

For data areas and data queues, automatic journaling of new *DTAARA or *DTAQ objects is supported: MIMIX configurations can be enabled to permit the automatic start of journaling for newly created data areas and data queues in libraries journaled
to a user journal. New MIMIX installations that are configured for MIMIX Dynamic Apply of files automatically have this behavior. For requirements for implicitly starting journaling on new objects, see “What objects need to be journaled” on page 294.

If the object is journaled to the user journal, MIMIX user journal replication processes can fully replicate the create operation. The user journal entries contain all the information necessary for replication without needing to retrieve information from the object on the source system. When MIMIX replicates a create operation through the user journal, MIMIX creates a tracking entry for the newly created object and an activity entry representing the T-CO (create) journal entry. Note that the create timestamp (*CRTTSP) attribute may differ between the source and target systems. If the object is not journaled to the user journal, the create of the object and subsequent operations are replicated through system journal processes. Similarly, if the specified values in the data group entry that identified the object as eligible for replication do not allow the object type to be cooperatively processed, then the create of the object is processed with system journal processing.

Determining how an activity entry for a create operation was replicated

To determine whether a create operation for a given object is being replicated through user journal processes or through system journal processes, do the following:
1. On the Work with Data Group Activity Entries (WRKDGACTE) display, locate the entry for the create operation that you want to check. Create operations have a value of T-CO in the Code column.
2. Use option 5 (Display) next to the activity entry for the create operation.
3. On the resulting details display, check the value of the Requires container send field. If *YES appears for an activity entry representing a create operation, the create operation is being replicated through the system journal. If *NO appears in the field, the create operation is being replicated through the user journal.
In environments using V5R4 and higher operating systems, user journal replication offers full support of create, delete, restore, and move and rename operations for IFS objects. Further, user journal replication also offers full support of these operations for data area and data queue objects.

Processing variations for common operations

Some variation exists in how MIMIX performs common operations such as moves, renames, deletes, and restores. Configurations specify whether these operations are processed through the system journal, the user journal, or a combination of both journals. Advanced journaling (user journal replication of data areas, data queues, and IFS objects), legacy cooperative processing, and MIMIX Dynamic Apply utilize both journals; however, MIMIX Dynamic Apply primarily processes through the user journal. MIMIX uses system journal replication processes for DLOs and for IFS objects and library-based objects which are not explicitly identified for user journal replication. The variations are based on the configuration of the data group entry used for replication.

Move/rename operations - system journal replication

Table 19 describes how MIMIX processes a move or rename journal entry from the system journal. The Original Source Object and New Name or Location columns indicate whether the object is identified within the name space for replication. The MIMIX Action column indicates the operation that MIMIX will attempt on the target system.

Table 19. Current object move actions

  Original Source Object                            New Name or Location                              MIMIX Action on Target System
  Excluded from or not identified for replication   Within name space of objects to be replicated     Create Object (see note 1)
  Identified for replication                        Excluded from or not identified for replication   Delete Object (see note 2)
  Identified for replication                        Within name space of objects to be replicated     Move Object
  Excluded from or not identified for replication   Excluded from or not identified for replication   None

1. If the source system object is not defined to MIMIX or if it is defined by an Exclude entry, it is not guaranteed that an object with the same name exists on the backup system or that it is really the same object as on the source system, since it is not defined with an Include entry. To ensure the integrity of the target (backup) system, a copy of the source object must be brought over from the source system. If the object is a library or directory, there is no guarantee that the target library exists on the target system.
2. If the target object is not defined to MIMIX or if it is defined by an Exclude entry, the customer is assumed not to care if the target object is replicated, so deleting the object is the most straightforward approach.
Move/rename operations - user journaled data areas, data queues, IFS objects

IFS, data area, and data queue objects replicated by user journal replication processes can be moved or renamed while maintaining the integrity of the data. If the new location or new name on the source system remains within the set of objects identified as eligible for replication, MIMIX will perform the move or rename operation on the object on the target system. When a move or rename operation starts with or results in an object that is not within the name space for user journal replication, MIMIX may need to perform additional operations in order to replicate the operation: MIMIX may use a create or delete operation, and may need to add or remove tracking entries. Each row in Table 20 summarizes a move/rename scenario and identifies the action taken by MIMIX.
Table 20. MIMIX actions when processing moves or renames of objects when user journal replication processes are involved

  Source object                                        New name or location                                 MIMIX action
  Identified for replication with user journal         Within name space of objects to be replicated with   Moves or renames the object on the target system and renames the
  processing                                           user journal processing                              associated tracking entry. See example 1.
  Not identified for replication                       Not identified for replication                       None. The object is not eligible for replication. See example 2.
  Identified for replication with user journal         Not identified for replication                       Deletes the target object and deletes the associated tracking entry.
  processing                                                                                                See example 3.
  Identified for replication with user journal         Within name space of objects to be replicated with   Moves or renames the object using system journal processes and
  processing                                           system journal processing                            removes the associated tracking entry. See example 4.
  Identified for replication with system journal       Within name space of objects to be replicated with   Creates a tracking entry for the object using the new name or
  processing                                           user journal processing                              location and moves or renames the object using user journal
                                                                                                            processes. If the object is a library or directory, MIMIX creates
                                                                                                            tracking entries for those objects within the library or directory that
                                                                                                            are also within the name space for user journal replication. See
                                                                                                            example 5.
  Not identified for replication                       Within name space of objects to be replicated with   Creates a tracking entry for the object using the new name or
                                                       user journal processing                              location. If the object is a library or directory, MIMIX creates
                                                                                                            tracking entries for those objects within the library or directory that
                                                                                                            are also within the name space for user journal replication.
                                                                                                            Synchronizes all of the objects identified by these new tracking
                                                                                                            entries. See example 6.
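Table 20 is essentially a decision table keyed on how the old and new names are classified. The following is a hypothetical Python condensation of it, not MIMIX logic; "user" and "system" stand for the user journal and system journal name spaces, and None stands for "not identified for replication".

```python
# Hypothetical sketch condensing Table 20: given the classification of a
# moved/renamed object's old and new names, return the action MIMIX takes.

def move_rename_action(source, target):
    if source == "user" and target == "user":
        return "rename object and its tracking entry"
    if source is None and target is None:
        return "none; object is not eligible for replication"
    if source == "user" and target is None:
        return "delete target object and its tracking entry"
    if source == "user" and target == "system":
        return "rename via system journal; remove tracking entry"
    if source == "system" and target == "user":
        return "create tracking entry; rename via user journal"
    if source is None and target == "user":
        return "create tracking entry; synchronize object"
    return "handled by system journal replication"

print(move_rename_action("user", None))  # delete target object and its tracking entry
```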
the old and new names fall within advanced journaling name space. For example. and source IFS objects for examples Data Group IFS Entries /TEST/STMF* /TEST/DIR* Source System IFS Objects in Name Space /TEST/stmf1 /TEST/dir1/doc1 Associated Data Group IFS Tracking Entries /TEST/stmf1 /TEST/dir1 /TEST/dir1/doc1 system journal replication /TEST/NOTAJ* /TEST/notajstmf1 /TEST/notajdir1/doc1 Configuration Supports advanced journaling advanced journaling Example 1. moves/renames outside name space: When MIMIX encounters a journal entry for a source system object outside of the name space that has been renamed or moved to another location also outside of the name space. as indicated in Table 20. MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/dir2. Results of move/rename operations within name space for advanced journaling Resulting data group IFS tracking entries /TEST/stmf2 /TEST/dir2 /TEST/dir2/doc1 Resulting Target IFS objects /TEST/stmf2 /TEST/dir2/doc1 Example 2. The rename operations are replicated and names are changed on the target system objects. The resulting changes on the target system objects and MIMIX configuration are shown in Table 22. Table 22. The tracking entries for these objects are also renamed. In both cases. Table 21. moves/renames within advanced journaling name space: The most common move and rename operations occur within advanced journaling name space. Initial data group IFS entries. The MIMIX behavior described is the same as that for data areas and data queues that are within the configured name space for advanced journaling. Thus. MIMIX is aware of only the original names. MIMIX ignores the transaction. moves/renames from advanced journaling name space to outside name space: In this example. Example 3. IFS tracking entries. 119 . 
and IFS tracking entries before the move/rename operation occurs.Processing variations for common operations The following examples use IFS objects and directories to illustrate the MIMIX operations in move/rename scenarios that involve user journal replication (advanced journaling). data group IFS entries. MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/xdir1 and IFS stream file /TEST/stmf1 was renamed to /TEST/xstmf1. Table 21 identifies the initial set of source system objects. as indicated in Table 20. Results of move/rename operations from system journal to advanced journaling name space Resulting data group IFS tracking entries /TEST/stmf1 /TEST/dir1 /TEST/dir1/doc1 Resulting target IFS objects /TEST/stmf1 /TEST/dir1/doc1 Example 6. Example 4. Table 23 shows these results. The objects identified by these tracking entries are individually synchronized from the source to the target system. Table 24. MIMIX deletes the IFS directory and IFS stream file from the target system. MIMIX is aware that the old names are within the system journal name space and that the new names are within the advanced journaling name space. moves/renames from outside to within advanced journaling name space: In this example MIMIX encounters journal entries indicating that the source system IFS directory /TEST/xdir1 was renamed to /TEST/dir1 and that IFS stream file /TEST/xstmf1 was renamed to /TEST/stmf1. MIMIX removes the tracking entries associated with the original names and performs the rename operation for the objects on the target system. The original names are outside of the name space and are not eligible for replication. As a result. Table 24 illustrates the results on the target system. MIMIX encounters user journal entries indicating that the source system IFS directory /TEST/dir1 was renamed to /TEST/notajdir1 and that IFS stream file /TEST/stmf1 was renamed to /TEST/notajstmf1. 
MIMIX also creates tracking entries for any objects that reside within the moved or renamed IFS directory (or library in the case of data areas or data queues). Table 23.but the new name is not. moves/renames from advanced journaling to system journal name space: In this example. MIMIX also deletes the associated IFS tracking entries. the new names are within 120 . the new names fall within the name space for replication through the system journal. MIMIX treats this as a delete operation during replication processing. MIMIX encounters journal entries indicating that source system IFS directory from /TEST/notajdir1 was renamed to /TEST/dir1 and that IFS stream file /TEST/notajstmf1 was renamed to /TEST/stmf1. However. MIMIX is aware that both the old names and new names are eligible for replication as indicated in Table 20. moves/renames from system journal to advanced journaling name space: In this example. However. MIMIX creates tracking entries for the names and then performs the rename operation on the target system using advanced journaling. Results of move/rename operations from advanced journaling to system journal name space Resulting data group IFS tracking entries (removed) (removed) Resulting target IFS objects /TEST/notajstmf1 /TEST/notajdir1/doc1 Example 5. • • • Delete operations . If the dynamic update option is not used.Processing variations for common operations the name space for advanced journaling as indicated in Table 20. or data queue object is restored. The transaction is transferred dynamically.files configured for legacy cooperative processing The following briefly describes the events that occur in MIMIX when a file that is defined for legacy cooperative processing is deleted: • System journal replication processes communicate with user journal replication processes that a file has been deleted on the source system and indicates that the file should be deleted from the target system. With user journal replication. 
MIMIX system journal replication processes generate an activity entry representing the delete operation and handle the delete of the object from the target system. Table 25. data area.user journaled data areas. See “Newly created files” on page 114. Restore operations . data area. MIMIX processes the operations as creates during replication. MIMIX system journal replication processes delete the file on the target system. and data queue objects on the source system are 121 . Table 25 illustrates the results. Because the objects were not previously replicated. IFS objects When an IFS. If the data group file entry is set to use the option to dynamically update active replication processes. A journal transaction which identifies the deleted file is created on the source system.user journaled data areas. the file and associated file entry will be dynamically removed from the replication processes. The objects identified by these tracking entries are individually synchronized from the source to the target system. the pre-existing object is replaced by a backup copy on the source system. IFS objects When a T-DO (delete) journal entry for an IFS. data queues. restores of IFS. the data group changes are not recognized until all data group processes are ended and restarted. data queues. or data queue object is encountered in the system journal. The user journal replication processes remove the corresponding tracking entry. MIMIX also creates tracking entries for any objects that reside within the moved or renamed IFS directory (or library in the case of data areas or data queues). data area. Results of move/rename operations from outside to within advanced journaling name space Resulting data group IFS tracking entries /TEST/stmf1 /TEST/dir1 /TEST/dir1/doc1 Resulting target IFS objects /TEST/stmf1 /TEST/dir1/doc1 Delete operations . supported through cooperative processing between MIMIX system journal and user journal replication processes. 
Provided the object was journaled when it was saved. or data queue object match the data group definition. During cooperative processing. or data queue object. or data queue object is also journaled . data area. Meanwhile. data area. or end and restart journaling on the object so that the journaling characteristics of the IFS. data area. 122 . user journal replication processes handle the management of the corresponding IFS or object tracking entry. system journal replication processes generate an activity entry representing the T-OR (restore) journal entry from the system journal and perform a save and restore operation on the IFS. a restored IFS. MIMIX may also start journaling. IFS objects to user journaling” on page 136 changes the configuration of an existing data group to use user journal replication processes for these objects. advanced techniques. and data groups that make up the replication environment. controllers. “Checklist: Change *DTAARA. For additional information see “Configuring advanced replication techniques” on page 320. • “Checklist: New remote journal (preferred) configuration” on page 125 uses shipped default values to create a new installation. Also. Unless you explicitly configure them otherwise. • • Upgrades and conversions: You can use any of the following topics. such as keyed replication. use topic ‘Choosing the correct checklist for MIMIX for MQ’ in the MIMIX for IBM WebSphere MQ book. as appropriate. New data groups will use MIMIX source-send processes in user journal replication. *DTAQ. journals. new data groups will use the IBM i remote journal function as part of user journal replication processes. system-level configuration for communications (lines. To configure a new installation that is to use the integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ). have additional configuration requirements.CHAPTER 5 Configuration checklists MIMIX can be configured in a variety of ways to support your replication needs. 
New installations: Before you start configuring MIMIX. to change a configuration: • • “Checklist: Converting to remote journaling” on page 131 changes an existing data group to use remote journaling within user journal replication processes. For available options. Definitions identify systems. “Checklist: Converting to legacy cooperative processing” on page 138 changes the configuration of an existing data group so that logical and physical source files • • • 123 . refer to the MIMIX for IBM WebSphere MQ book. “Converting to MIMIX Dynamic Apply” on page 133 provides checklists for two methods of changing the configuration of an existing data group to use MIMIX Dynamic Apply for logical and physical file replication. Data groups that existed prior to installing version 5 must use this information in order to use MIMIX Dynamic Apply. “Checklist: New MIMIX source-send configuration” on page 128 configures a new installation and is appropriate when your environment cannot use remote journaling. Choose one of the following checklists to configure a new installation of MIMIX. Each configuration requires a combination of definitions and data group entries. IP interfaces) must already exist between the systems that you plan to include in the MIMIX installation. Data group entries identify what to replicate and the replication option to be used. To add integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ) to an existing installation. see “Replication choices by object type” on page 88. communications. Configuration checklists are processed from the system journal and physical data files use legacy cooperative processing. Other checklists: The following configuration checklist employs less frequently used configuration tools and is not included in this chapter. • Use “Checklist: copy configuration” on page 509 if you need to copy configuration data from an existing product library into another MIMIX installation. 124 . If you are using the TCP protocol. 
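Both new-installation checklists assume that communications between the systems are already configured and operational. As a quick pre-check, this can be verified from a command line. The sketch below uses commands named in the checklists that follow; the remote system name is a placeholder:

```
/* Verify basic TCP connectivity to the other system           */
/* (OTHERSYS is a placeholder for your remote system name)     */
PING RMTSYS(OTHERSYS)

/* Look for the Lakeview TCP server job, which runs under the  */
/* MIMIXSBS subsystem with a function of PGM-LVSERVER          */
WRKACTJOB SBS(MIMIXSBS)
```

If the PING fails or no PGM-LVSERVER job is shown, resolve the communications configuration before continuing with either checklist.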
Checklist: New remote journal (preferred) configuration

Use this checklist to configure a new installation of MIMIX. This checklist creates the preferred configuration, which uses IBM i remote journaling and uses MIMIX Dynamic Apply to cooperatively process logical and physical files.

Communications between the systems must be configured and operational before you start configuring MIMIX. If communications is not configured, refer to “System-level communications” on page 140 for more information.

To configure your system, perform the following steps on the system that you want to designate as the management system of the MIMIX installation:

1. Create system definitions for the management system and each of the network systems for the MIMIX installation. Use topic “Creating system definitions” on page 150.
2. Create transfer definitions to define the communications protocol used between pairs of systems. A pair of systems consists of a management system and a network system. Use topic “Creating a transfer definition” on page 163.
   Note: Default values for transfer definitions enable MIMIX to create and manage autostart job entries for the server. If your transfer definitions prevent this, you can create and manage your own autostart job entries. For more information see “Using autostart job entries to start the TCP server” on page 169.
3. If you are using the TCP protocol, verify that it is operational using the PING command.
4. If you have TCP configured and plan to use it for your transfer protocol, ensure that the Lakeview TCP server is running on each system defined in the transfer definition. You can use the Work with Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS subsystem with a function of PGM-LVSERVER. If the Lakeview TCP server is not active on a system, use topic “Starting the TCP/IP server” on page 168.
5. If you have implemented DDM password validation, verify that your environment will allow MIMIX RJ support to work properly. Use topic “Checking DDM password validation level in use” on page 280. If you are using the TCP protocol, ensure that the DDM TCP server is running using topic “Starting the DDM TCP/IP server” on page 279.
6. Verify that the communications link defined in each transfer definition is operational using topic “Verifying a communications link for system definitions” on page 173.
7. Start the MIMIX managers using topic “Starting the system and journal managers” on page 269. When the system manager is running, configuration information for data groups will be automatically replicated to the other system as you create it.
8. Create the data group definitions that you need using topic “Creating a data group definition” on page 221. The referenced topic creates a data group definition with appropriate values to support MIMIX Dynamic Apply.
9. Confirm that the journal definitions which have been automatically created have the values you require. For information, see “Journal definitions created by other processes” on page 178, “Tips for journal definition parameters” on page 179, and “Journal definition considerations” on page 184.
10. Build the necessary journaling environments for the RJ links using “Building the journaling environment” on page 195. If the data group is switchable, be sure to build the journaling environments for both directions--source system A to target system B (target journal @R) and source system B to target system A (target journal @R).
11. Use Table 26 to create data group entries for this configuration. This configuration requires object entries and file entries for LF and PF files. For other object types or classes, any replication options identified in planning topic “Replication choices by object type” on page 88 are supported.
    Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you should still create file entries for PF data files to ensure that legacy cooperative processing can be used.

Table 26. How to configure data group entries for the remote journal (preferred) configuration

Class: Library-based objects
Planning and Requirements Information: “Identifying library-based objects for replication” on page 91; “Identifying logical and physical files for replication” on page 96; “Identifying data areas and data queues for replication” on page 103
Do the following:
1. Create object entries using “Creating data group object entries” on page 242.
2. After creating object entries, load file entries for LF and PF (source and data) *FILE objects using “Loading file entries from a data group’s object entries” on page 247.
3. After creating object entries, load object tracking entries for any *DTAARA and *DTAQ objects to be replicated from a user journal. Use “Loading object tracking entries” on page 258.

Class: IFS objects
Planning and Requirements Information: “Identifying IFS objects for replication” on page 106
Do the following:
1. Create IFS entries using “Creating data group IFS entries” on page 255.
2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use “Loading IFS tracking entries” on page 257.

Class: DLOs
Planning and Requirements Information: “Identifying DLOs for replication” on page 111
Do the following: Create DLO entries using “Creating data group DLO entries” on page 259.

12. Do the following to confirm and automatically correct any problems found in file entries associated with data group object entries:
    a. Temporarily change the Action for running audits policy using the following command: SETMMXPCY DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)
    b. From the management system, type WRKAUD RULE(#DGFE) and press Enter.
    c. Next to the data group you want to confirm, type 9 (Run rule) and press F4 (Prompt).
    d. On the Run Rule (RUNRULE) display, specify *NO for the Use run rule on system policy prompt. Then press Enter.
    e. Check the audit status for a value of *NODIFF or *AUTORCVD. If the audit results in any other status, resolve the problem. For additional information, see “Resolving audit problems - 5250 emulator” on page 543 and “Interpreting results for configuration data - #DGFE audit” on page 546.
    f. Set the Action for running audits policy to its previous value. (The default value is *INST.) Use the command: SETMMXPCY DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
13. Ensure that object auditing values are set for the objects identified by the configuration before synchronizing data between systems. Use the procedure “Setting data group auditing values manually” on page 270. Doing this now ensures that objects to be replicated have the object auditing values necessary for replication and that any transactions which occur between configuration and starting replication processes can be replicated.
14. Start journaling using the following procedures as needed for your configuration.
    Note: If the data group is switchable, be sure to specify *SRC for the Start journaling on system (JRNSYS) parameter in the commands to start journaling.
    • For user journal replication, use “Journaling for physical files” on page 297 to start journaling on both source and target systems.
    • For IFS objects configured for user journal replication, use “Journaling for IFS objects” on page 300.
    • For data areas or data queues configured for user journal replication, use “Journaling for data areas and data queues” on page 303.
15. Synchronize the database files and objects on the systems between which replication occurs. Topic “Performing the initial synchronization” on page 442 identifies options available for synchronizing and identifies how to establish a synchronization point that identifies the journal location that will be used later to initially start replication.
16. Confirm that the systems are synchronized by checking that the libraries, folders, and directories contain expected objects on both systems.
17. Verify the configuration. Topic “Verifying the initial synchronization” on page 447 identifies the additional aspects of your configuration that are necessary for successful replication.
18. Start the data group using “Starting data groups for the first time” on page 282.
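The file entry verification step in the checklist above brackets the #DGFE audit with two policy changes. Condensed into command form (the data group name parts are placeholders, and *INST is the shipped default for the Action for running audits policy):

```
/* Temporarily set the audit action policy to compare and repair */
SETMMXPCY DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)

/* From the management system, run the #DGFE audit (option 9,    */
/* Run rule, from the Work with Audits display)                  */
WRKAUD RULE(#DGFE)

/* After the audit reports *NODIFF or *AUTORCVD, restore the     */
/* previous policy value                                         */
SETMMXPCY DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
```

The same bracketing sequence applies to the #DGFE verification step in the source-send checklist that follows.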
Use topic “Creating a transfer definition” on page 163. “Tips for journal definition parameters” on page 179. 3. 5. You can use the Work with Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS subsystem with a function of PGM-LVSERVER. use topic “Starting the TCP/IP server” on page 168. Note: Default values for transfer definition enable MIMIX to create and manage autostart job entries for the server. verify that is it is operational using the PING command. perform the following steps on the system that you want to designate as the management system of the MIMIX installation: 1. To configure a source-send environment. Verify that the communications link defined in each transfer definition is operational using topic “Verifying a communications link for system definitions” on page 173. this checklist will configure a new installation that uses MIMIX source-send processes for database replication. Create system definitions for the management system and each of the network systems for the MIMIX installation. 128 . If your transfer definitions prevent this. Use topic “Creating system definitions” on page 150. For information. If the Lakeview TCP server is not active on a system. If you are using the TCP protocol. and “Journal definition considerations” on page 184. see “Journal definitions created by other processes” on page 178. a. in cases where you cannot use remote journaling. temporarily change the Action for running audits policy using the following command: SETMMXPCY DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR) b. resolve the problem. d. see “Resolving audit problems . type WRKAUD RULE(#DGFE) and press Enter. For other object types or classes. For additional information. From the source system. Use “Loading IFS tracking entries” on page 257. Class Librarybased objects How to configure data group entries a new MIMIX source-send configuration.5250 emulator” on page 543 and “Interpreting results for configuration data . c. 
Do the following to confirm and automatically correct any problems found in file entries associated with data group object entries: a. Use Table 27 to create data group entries for this configuration. 10. type 9 (Run rule) and press F4 (Prompt). If the journaling environment does not exist. load object tracking entries for *DTAARA and *DTAQ objects to be replicated from a user journal. Use “Loading object tracking entries” on page 258. From the management system. 129 . Next to the data group you want to confirm. any replication options identified in planning topic “Replication choices by object type” on page 88 are supported. On the Run Rule (RUNRULE) display specify *NO for the Use run rule on system policy prompt. load IFS tracking entries for IFS objects to be replicated from a user journal. After creating IFS entries. Then press Enter.Checklist: New MIMIX source-send configuration 9. Check the audit status for a value of *NODIFF or *AUTORCVD.#DGFE audit” on page 546. load file entries for PF (data) *FILE objects using “Loading file entries from a data group’s object entries” on page 247. After creating object entries. Planning and Requirement Information “Identifying library-based objects for replication” on page 91 “Identifying logical and physical files for replication” on page 96 “Identifying data areas and data queues for replication” on page 103 IFS objects “Identifying IFS objects for replication” on page 106 DLOs “Identifying DLOs for replication” on page 111 11. Do the following: 1. 2. e. 2. Create IFS entries using “Creating data group IFS entries” on page 255. This configuration requires object entries and file entries for legacy cooperative processing of PF data files. After creating object entries. Create object entries using “Creating data group object entries” on page 242. 3. Table 27. 1. use topic “Building the journaling environment” on page 195 to create the journaling environment. 
Create DLO entries using “Creating data group DLO entries” on page 259. If the audit results in any other status. Topic “Verifying the initial synchronization” on page 447 identifies the additional aspects of your configuration that are necessary for successful replication. configured for user journal replication. Topic “Performing the initial synchronization” on page 442 identifies options available for synchronizing and identifies how to establish a synchronization point that identifies the journal location that will be used later to initially start replication. 13. Confirm that the systems are synchronized by checking the libraries. For data areas or data queues configured for user journal replication. 17. (The default value is *INST. 16. 14. folders and directories contain expected objects on both systems. 130 . 15. Verify your configuration.) Use the command: SETMMXPCY DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST) 12. set the Action for running audits policy to its previous value. • • • For user journal replication. From the management system. be sure to specify *SRC for the Start journaling on system (JRNSYS) parameter in the commands to start journaling. Start the data group using “Starting data groups for the first time” on page 282. use “Journaling for data areas and data queues” on page 303. Note: If the objects do not yet exist on the target system. Synchronize the database files and objects on the systems between which replication occurs. Use the procedure “Setting data group auditing values manually” on page 270. use “Journaling for IFS objects” on page 300. Start journaling using the following procedures as needed for your configuration. Doing this now ensures that objects to be replicated have the object auditing values necessary for replication and that any transactions which occur between configuration and starting replication processes can be replicated. 
use “Journaling for physical files” on page 297 to start journaling on both source and target systems.f. Ensure that object auditing values are set for the objects identified by the configuration before synchronizing data between systems. For IFS objects. 3. you need to end the data group you are converting to remote journaling and start it again as follows: a. Use topic “Checking DDM password validation level in use” on page 280. 6. Perform these tasks from the MIMIX management system unless these instructions indicate otherwise. data queues. files configured for legacy processing prior to this conversion will continue to be replicated with legacy cooperative processing. Modify the data group definition as follows: a. 8. Specify *YES for the Use remote journal link prompt. and IFS objects are processed. Connect the journal definitions for the local and remote journals using “Adding a remote journal link” on page 202. From the Work with DG Definitions display. Perform a controlled end of the data group (ENDDG command). you need to verify that your environment will allow MIMIX RJ support to work properly. 7. type a 2 (Change) next to the data group you want and press Enter. Build the journaling environment on each system defined by the RJ pair using “Building the journaling environment” on page 195. Do the following to ensure that you have a functional transfer definition: a. c. This procedure also creates the target journal definition. If you are using the TCP protocol. Use topic “Changing a transfer definition to support remote journaling” on page 165. make the modifications to the program described in “Changes to startup programs” on page 278. press Enter. b. 1. Verify the communications link using “Verifying the communications link for a data group” on page 174. To make the configuration changes effective. d. When you are ready to accept the changes. For example. 
Modify the transfer definition to identify the RDB directory entry.Checklist: Converting to remote journaling Checklist: Converting to remote journaling Use this checklist to convert an existing data group from using MIMIX source-send processes to using MIMIX Remote Journal support for user journal replication. 131 . Refer to topic “Ending all replication in a controlled manner” in the Using MIMIX book. b. If you use a startup program. 4. ensure that the DDM TCP server is running using topic “Starting the DDM TCP/IP server” on page 279. Press Enter to see additional prompts. 2. If you have implemented DDM password validation. 5. Note: This checklist does not change values specified in data group entries that affect how files are cooperatively processed or how data areas. specifying *ALL for Process and *CNTRLD for End process. The Change Data Group Definition (CHGDGDFN) display appears. 132 . Be sure to specify *ALL for Start processes prompt (PRC parameter) and *LASTPROC as the value for the Database journal receiver and Database large sequence number prompts.b. Start data group replication using the procedure “Starting selected data group processes” in the Using MIMIX book. Requirements: Before starting. For a complete list of required and recommended IBM PTFs.Converting to MIMIX Dynamic Apply Converting to MIMIX Dynamic Apply Use either procedure in this topic to change a data group configuration to use MIMIX Dynamic Apply. As of version 5. 133 . Perform the following steps from the management system on an active data group: 1. Keyed replication cannot be present in the data group configuration. consider the following: • Any data group that existed prior to installing version 5 must use one of these procedures in order to use MIMIX Dynamic Apply. • • “Converting using the Convert Data Group command” on page 133 automatically converts a data group configuration. objects of type *FILE (LF. 
PF source and data) are replicated using primarily user journal replication processes. This configuration is the most efficient way to process these files. The conversion is complete when you see message LVI321A. Any data group to be converted must already be configured to use remote journaling. The data group must be active when starting the conversion. From a command line enter the command: CVTDG DGDFN(name system1 system2) 2. log in to Support Central and refer to the Technical Documents page. Any data group to be converted must have *SYSJRN specified as the value of Cooperative journal (COOPJRN). A minimum level of IBM i PTFs are required on both systems. It is recommended that you contact your Certified MIMIX Consultant for assistance before performing this procedure. This command will automatically attempt to perform the steps described in the manual procedure and will issue diagnostic messages if a step cannot be performed. “Checklist: manually converting to MIMIX Dynamic Apply” on page 134 enables you to perform the conversion yourself. The conversion must be performed from the management system. Converting using the Convert Data Group command The Convert Data Group (CVTDG) will automatically convert the configuration of specified data groups to enable MIMIX Dynamic Apply. newly created data groups are automatically configured to use MIMIX Dynamic Apply when its requirements and restrictions are met and shipped command defaults are used. In a MIMIX Dynamic Apply configuration. • • • • • For additional information about configuration requirements and limitations of MIMIX Dynamic Apply. see “Identifying logical and physical files for replication” on page 96. Watch for diagnostic messages in the job log and take any recovery action indicated. For more information. Use the command: CHGDGDFN DGDFN(name system1 system2) COOPJRN(*USRJRN) 9. See “Requirements and limitations of MIMIX Dynamic Apply” on page 101. 134 . 
This can be done by running the following command from the source system: SETDGAUD DGDFN(name system1 system2) OBJTYPE(*AUTOJRN) Note: The QDFTJRN data area is created in libraries identified by data group object entries which are configured for cooperative processing of files. To ensure that new files created while the data group is inactive are automatically journaled. 8. From the management system. Checklist: manually converting to MIMIX Dynamic Apply Perform the following steps from the management system to enable an existing data group to use MIMIX Dynamic Apply: 1. 5. Apply any IBM PTFs (or their supersedes) associated with IBM i releases as they pertain to your environment. 3. Follow the steps for “Confirming the end request completed without problems” in the Using MIMIX book. You may need to create additional entries to achieve desired results. Note: Topic “Ending a data group in a controlled manner” in the Using MIMIX book includes subtask “Preparing for a controlled end of a data group” and the other subtasks needed for Step 6 and Step 7. subject to some limitations. For a list of restricted libraries and other details of requirements for implicitly starting journaling. the QDFTJRN data areas must be created in the libraries configured for replication of cooperatively processed files. Ensure that you have one or more data group object entries that specify the required values. 4. Log in to Support Central and refer to the Technical Documents page for a list of required and recommended IBM PTFs. see “Identifying logical and physical files for replication” on page 96. These entries identify the items within the name space for replication. Refer to topic “Preparing for a controlled end of a data group” in the Using MIMIX book. Ensure that there are no open commit cycles for the database apply process.
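As a hedged illustration of the last point, the standard IBM i Work with Commitment Definitions command can be used to look for open commit cycles before ending the data group; the manual itself does not prescribe this exact command.

```cl
/* Display active commitment definitions for all jobs. Any open   */
/* commit cycles involving replicated files should be committed   */
/* or rolled back before performing the controlled end.           */
WRKCMTDFN JOB(*ALL)
```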
Verify that the data group is synchronized by running the MIMIX audits. 6. See “Verifying the initial synchronization” on page 447. Follow the procedure for “Performing the controlled end” in the Using MIMIX book. 2. change the data group definition so that the Cooperative journal (COOPJRN) parameter specifies *USRJRN. Verify that the System Manager jobs are active. Perform a controlled end of the data group you are converting. See “Starting the system and journal managers” on page 269. 10. Verify the environment meets the requirements and restrictions. See “Starting journaling for physical files” on page 297. From the management system. 12. Start journaling for all files not previously journaled. Ensure that the value you specify (*SYS1 or *SYS2) for the LODSYS parameter identifies the target system. See “Verifying the initial synchronization” on page 447. Start the data group specifying the command as follows: STRDG DGDFN(name system1 system2) CLRPND(*YES) 14. Verify that data groups are synchronized by running the MIMIX audits. use the following command to load the data group file entries from the target system. LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE) UPDOPT(*ADD) LODSYS(value) SELECT(*NO) For additional information about loading file entries. 13. 11. *DTAQ. To convert existing data groups to use advanced journaling. 5. see “Restrictions . and data queues should be replicated in a data group shared with other objects undergoing database replication. Verify the value in the data group definition is correct. Use the command WRKDGACTE STATUS(*ACTIVE) to display any pending activity entries. including examples Database apply session balancing User exit program considerations 2. the journal definitions and journaling environment for user journal replication may not exist.
Be sure to specify *YES for the Cooperate with database prompt in procedure “Adding or changing a data group IFS entry” on page 255. IFS objects to user journaling Use this checklist to change the configuration of an existing data group so that IFS objects. data areas. It also identifies the MIMIX processes used for replication and the purpose of tracking entries.Checklist: Change *DTAARA. Determine if IFS objects.) The procedure in this checklist assumes that the data group already includes user journal replication for files. 6. (This environment is also called advanced journaling. Perform a controlled end of the data groups that will include objects to be replicated using advanced journaling. Add or change data group object entries for the data areas and data queues you want to replicate using the procedure “Adding or changing a data group object 136 . Topic “Planning for journaled IFS objects. and data queues must specify *ALL as the value for Data group type (TYPE). or if these objects should be in a separate data group. The data group definitions used for user journal replication of IFS objects. *DTAARA and *DTAQ objects can be replicated from entries in a user journal. See the Using MIMIX book for how to end a data group in a controlled manner (ENDOPT(*CNTRLD)).user journal replication of IFS objects” on page 109. data areas. For additional information. data areas. Any activities that are still in progress will be listed. do the following: 1. and data queues” on page 79 provides guidelines for the following planning considerations: • • • • Serializing transactions with database files Converting existing data groups. If necessary. create the journal definitions (“Creating a journal definition” on page 192) and build the journaling environment (“Building the journaling environment” on page 195). change the value. Add or change data group IFS entries for the IFS objects you want to replicate. Note: If you have to change the Data group type. 3. If necessary. 
Topic “User journal replication of IFS objects. Ensure that all pending activity for objects and IFS objects has completed. data areas. 4. data queues” on page 69 describes the benefits and restrictions of replicating these objects from user journal entries. For data areas and data queues. 10. If you ever plan to switch the data groups. perform the steps in “Verifying journal receiver size options” on page 191 to ensure journaling is configured appropriately. 14. For more information about starting data groups. data areas and data queues between the source and target systems. 8. object. see “Verifying journaling for data areas and data queues” on page 305. See “User exit program considerations” on page 81. start the data groups. or file entries and starting the data group. IFS objects to user journaling entry” on page 243. use the SETDGAUD command before synchronizing data between systems. Verify that journaling is started correctly. This step is important to ensure the IFS objects. see “Restrictions . For IFS objects. data areas and data queues are actually replicated. Change any journal receiver size options necessary using “Changing journal receiver size options” on page 191. Start journaling using the following procedures as needed for your configuration. Use the procedure “Setting data group auditing values manually” on page 270. you should specify IBM i journal receiver size options that provide large journal receivers and large journal entries. For additional information. If you anticipate a delay between configuring data group IFS. 12. After IFS objects are configured. 11. Journals created by MIMIX are configured to allow maximum amounts of data. 137 . b. 7. 13. • • For IFS objects. Once you have completed the preceding steps. see the Using MIMIX book. Doing so will ensure that replicated objects are properly audited and that any transactions for the objects that occur between configuration and starting the data group are replicated. For IFS objects. 
If you are replicating large amounts of data. follow the Synchronize Object (SYNCOBJ) procedures. Use the procedures in “Loading tracking entries” on page 257. Refer to chapter “Synchronizing data between systems” on page 431 for additional information. For data areas and data queues. see “Verifying journaling for IFS objects” on page 302.user journal replication of data areas and data queues” on page 104. use “Starting journaling for data areas and data queues” on page 303 9. you must start journaling on both the source system and on the target system.Checklist: Change *DTAARA. Synchronize the IFS objects. follow the Synchronize IFS Object (SYNCIFS) procedures. Load the tracking entries associated with the data group IFS entries and data group object entries you configured. Journals that already exist may need to be changed. If you have database replication user exit programs. *DTAQ. use “Starting journaling for IFS objects” on page 300 For data areas or data queues. a. changes may need to be made. newly created data groups are configured for MIMIX Dynamic Apply when default values are taken and configuration requirements are met. Verify that data group is synchronized by running the MIMIX audits. Note: Topic “Ending a data group in a controlled manner” in the Using MIMIX book includes subtask “Preparing for a controlled end of a data group” and the subtask needed for Step 3. 2. This checklist does not convert user journal replication processes from using remote journaling to MIMIX source-send processing. End the data group you are converting by performing a controlled end. Follow the procedure for “Performing the controlled end” in the Using MIMIX book. 138 . use this checklist to change the configuration of an existing data group so that user journal replication (MIMIX Dynamic Apply) is no longer used. 4. See “Verifying the initial synchronization” on page 447. consider the following: • • • As of version 5. 
change the data group definition so that the Cooperative journal (COOPJRN) parameter specifies *SYSJRN. This checklist changes the configuration so that physical data files can be processed using legacy cooperative processing. Important! Before you use this checklist. Refer to topic “Preparing for a controlled end of a data group” in the Using MIMIX book. Perform the following steps to enable legacy cooperative processing and system journal replication: 1. Ensure that the value you specify (*SYS1 or *SYS2) for the LODSYS parameter identifies the target system. From the management system. Use the Work with Data Groups display to ensure that there are no files on hold and no failed or delayed activity entries. 3. From the management system. For more information. *DTAQ. or IFS objects that are replicated through the user journal are not affected. The configuration of any other *DTAARA. This checklist only affects the configuration of *FILE objects. Save the data group file entries to an outfile. use the following command to load the data group file entries from the target system. LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE) UPDOPT(*REPLACE) LODSYS(value) SELECT(*NO) For additional information about loading file entries. see “Loading file entries from a data group’s object entries” on page 247. see “Requirements and limitations of legacy cooperative processing” on page 102. Logical files and physical source files will be processed using the system journal. Use the command: WRKDGFE DGDFN(DGDFN SYS1 SYS2) OUTPUT(*OUTFILE) 6. Compare the data group file entries with those saved in the outfile created in Step 5. Use the command: CHGDGDFN DGDFN(name system1 system2) COOPJRN(*SYSJRN) 5. 8. Start the data group specifying the command as follows: STRDG DGDFN(name system1 system2) CLRPND(*YES)
run the following command from each system: DLTDTAARA DTAARA(library/QDFTJRN) 9. Optional step: Delete the QDFTJRN data areas. 7. These data areas automatically start journaling for newly created files. This may not be desired because the journal image (JRNIMG) value for these files may be different than the value specified in the MIMIX configuration. Such a difference will be detected by the file attributes (#FILATR) audit. Any differences need to be manually updated. To delete these data areas. MIMIX supports the following communications protocols: • Transmission Control Protocol/Internet Protocol (TCP/IP) • Systems Network Architecture (SNA) • OptiConnect MIMIX should have a dedicated communications line that is not shared with other applications. For TCP/IP. MIMIX IntelliStart can help you determine your communications requirements. CHAPTER 6 System-level communications This information is provided to assist you with configuring the IBM Power™ Systems communications that are necessary before you can configure MIMIX. or users on the production system. “Configuring APPC/SNA” on page 144 describes basic requirements for SNA communications. but if your primary communications protocol is TCP/IP. you need to consider additional aspects that may affect the communications speed. The topics in this chapter include: • “Configuring for native TCP/IP” on page 140 describes using native TCP/IP communications and provides steps to prepare and configure your system for it. A dedicated path will make it easier to fine-tune your MIMIX environment and to determine the cause of problems. Your Certified MIMIX Consultant can assist you in determining your communications requirements and ensuring that communications can efficiently handle peak volumes of journal transactions.
jobs. This allows users with TCP communications on their networks to use MIMIX without requiring the use of IBM ANYNET through SNA. These aspects include the type of objects being transferred and the size of data queues. it is recommended that the TCP/IP host name or interface used be in its own subnet. Using TCP/IP communications may or may not improve your CPU usage. For SNA. it is recommended that MIMIX have its own communication line instead of sharing an existing SNA device. Configuring for native TCP/IP MIMIX has the ability to use native TCP/IP communications over sockets. “Configuring OptiConnect” on page 144 describes basic requirements for OptiConnect communications and identifies MIMIX limitations when this communications protocol is used. and files defined to cooperate with user journal replication processes. you can begin the MIMIX configuration process. and follow the instructions to configure the system to use TCP/IP communications. Creating Ports. In this example. The procedure for configuring a system to use TCP/IP is documented in the information included with the IBM i software. MIMIX installations vary according to the needs of each enterprise. Preparing your system to use TCP/IP communications with MIMIX requires the following: 1. Port aliases-simple example Before using the MIMIX TCP/IP support. 2. Figure 9 shows a MIMIX installation with two network systems. Configure both systems to use TCP/IP. A more complex MIMIX installation may consist of one management system and multiple network systems. Figure 9.Configuring for native TCP/IP Native TCP/IP communications allow MIMIX users greater flexibility and provides another option in the communications available for use on their PowerTM Systems. If you need to use port aliases. b. a MIMIX installation consists of one management system and one network system. 3. Creating Ports. SC41-5430. the MIMIX installation consists of two systems. At a minimum. Refer to the IBM TCP/IP Fastpath Setup book. 
A large enterprise may even have multiple MIMIX installations that are interconnected. the MIMIX installation consists of three systems. you must first configure the system to recognize the feature. In this example. Figure 8 shows a simple MIMIX installation in which the management system (LONDON) and a network system (HONGKONG) use the TCP communications protocol through the port number 50410. MIMIX users can also continue to use IBM ANYNET support to run SNA protocols over TCP networks. The port identifiers used depend on the configuration of the MIMIX installations. Create the port aliases for each system using the procedure in topic “Creating port aliases” on page 143. Once the system-level communication is configured. 141 . Figure 8. do the following: a. Refer to the examples “Port aliases-simple example” on page 141 and “Port aliases-complex example” on page 142. This involves identifying the ports that will be used by MIMIX to communicate with other systems. you need to have a service table entry on each system that equates the port number to the port alias. Figure 10 shows an example of such an environment with two MIMIX installations. Figure 10. For example. Port aliases-complex example If a network system communicates with more than one management system (it participates with multiple MIMIX installations).System-level communications two of which are network systems. one for each MIMIX installation in which it participates. it must have a different port for each management system with which it communicates. the port 50410 is used to communicate between LONDON (the management system) and HONGKONG and CHICAGO (network systems). you might have a service table entry on system LONDON that defines an alias of MXMGT for port number 50410. In the LIBB cluster. if you need to use port aliases for port 50410. Creating Port Aliases. the port 50411 is used to communicate between CHICAGO (the management system for this cluster) and MEXICITY and CAIRO. In this example. 
The CHICAGO system has two port numbers defined. Similarly. the system CHICAGO participates in two 142 . you might have service table entries on systems HONGKONG and CHICAGO that define an alias of MXNET for port 50410. You would use these aliases in the PORT1 and PORT2 parameters in the transfer definition. In the LIBA cluster. In both Figure 8 and Figure 9. • Do the following to create a port alias on a system: 1. you need to add port aliases for both systems in the pair on each system. Select option 21 (Configure related tables) and press Enter. You might use an alias of LIBBMGT for port 50411 on CHICAGO and an alias of LIBBNET for port 50411 on both CAIRO and MEXICITY. From a command line. Notes: • • Perform this procedure on each system in the MIMIX installation that will use the TCP protocol. If you need to use port aliases in an environment such as Figure 10. 2. In this example. you might use a port alias of LIBAMGT for port 50410 on LONDON and an alias of LIBANET for port 50410 on both HONKONG and CHICAGO. 143 . such as between a management system and a network system. For example. To allow communications in both directions between a pair of systems. type the command CFGTCP and press Enter.Configuring for native TCP/IP MIMIX installations and uses a separate port for each MIMIX installation. Creating port aliases The following procedure describes the steps for creating port aliases which allow MIMIX installations to communicate through TCP/IP. If you are using more than one MIMIX installation. you need to have a service table entry on each system that equates the port number to the port alias. The Configure TCP/IP menu appears. define a different set of aliases for each MIMIX installation. You would use these port aliases in the PORT1 and PORT2 parameters on the transfer definitions. CHICAGO would require two port aliases and two service table entries. In the blank at the top of the Port column. 
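Following the complex example above, the two service table entries on CHICAGO might be added with the IBM i ADDSRVTBLE command. This is a sketch: the alias names and port numbers come from the example itself, but verify the exact values and alias case conventions for your environment.

```cl
/* CHICAGO is a network system in the LIBA installation...        */
ADDSRVTBLE SERVICE('LIBANET') PORT(50410) PROTOCOL('TCP') +
           TEXT('MIMIX port alias - LIBA installation')

/* ...and the management system in the LIBB installation.         */
ADDSRVTBLE SERVICE('LIBBMGT') PORT(50411) PROTOCOL('TCP') +
           TEXT('MIMIX port alias - LIBB installation')
```

Each system that participates in an installation needs the matching entry that equates its port number to its alias, as the procedure above describes.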
type TCP to identify this entry as using TCP/IP communications. c. Configuring APPC/SNA Before you create a transfer definition that uses the SNA protocol. Configuring OptiConnect If you plan to use the OptiConnect protocol. consult your network administrator before continuing. and device do not exist. Verify that the information shown for the alias and port is what you want. and then press Enter. 4. In the blank at the top of the Protocol column. Select option 1 (Work with service table entries) and press Enter. Attention: MIMIX requires that you restrict the length of port aliases to 14 or fewer characters and suggests that you specify the alias in uppercase characters. Type a 1 in the Opt column next to the blank lines at the top of the list. The Work with Service Table Entries display appears. 5. Do the following: a. a functioning OptiConnect line must exist between the two systems that you identify in the transfer definition. You can use the OptiConnect® product from IBM for all communication for most1 MIMIX processes. Vision Solutions recommends that you use the same port number or same port alias on each system in the MIMIX installation. You can page down through the list to ensure that the number is not being used by the system. In the blank at the top of the Service column. type a description of the port alias. d. At the Text 'description' prompt. b. and device must exist between the systems that will be identified by the transfer definition. controller. In the blank at the top of the Port column. enclosed in apostrophes. Press Enter. a functioning SNA (APPN or APPC) line. controller. If a line. Use the IBM book OptiConnect for OS/400 to install and verify OptiConnect communications. Use the
Then you can do the following: • Ensure that the QSOC library is in the system portion of the library list. The #FILDTA audit and the Compare File Data (CMPFILDTA) command require TCP/IP communications. specify *OPTI for the transfer protocol. use the CHGSYSVAL command to add this library to the system library list. • When you create the transfer definition. command DSPSYSVAL SYSVAL(QSYSLIBL) to verify whether the QSOC library is in the system portion of the library list. 1. If it is not. If you need to use port aliases. consult your network administrator before continuing. 143 . Configuring APPC/SNA Before you create a transfer definition that uses the SNA protocol. Configuring OptiConnect If you plan to use the OptiConnect protocol. “Changing a system definition” on page 151 provides the steps to follow for changing a system definition. “Multiple network system considerations” on page 152 describes recommendations when configuring an environment that has multiple network systems. Chapter 1 MIMIX overview Configuring system definitions CHAPTER 7 Configuring system definitions By creating a system definition. you identify to MIMIX characteristics of IBM Power™ Systems that participate in a MIMIX installation. It is recommended that you avoid naming system definitions based on their roles. and backup change upon switching. The topics in this chapter include: • “Tips for system definition parameters” on page 147 provides tips for using the more common options for system definitions. “Creating system definitions” on page 150 provides the steps to follow for creating system definitions. When you create a system definition. MIMIX automatically creates a journal definition for the security audit journal (QAUDJRN) for the associated system. This journal definition is used by MIMIX system journal replication processes. System roles such as source. target. production.
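The QSOC library check and change described above can be performed as follows. The VALUE shown assumes the shipped default system library list; substitute your own system library list with QSOC appended.

```cl
/* Verify whether QSOC is in the system portion of the library    */
/* list (QSYSLIBL system value).                                  */
DSPSYSVAL SYSVAL(QSYSLIBL)

/* If it is not, add it. Keep all existing entries; QSOC is       */
/* appended to the list here.                                     */
CHGSYSVAL SYSVAL(QSYSLIBL) VALUE('QSYS QSYS2 QHLPSYS QUSRSYS QSOC')
```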
You can also control the severity and type of messages that are sent to each message queue.). Context-sensitive help is available online for all options on the system definition commands. If you specify a Secondary transfer definition. the primary message queue. By default. #.Z. The system (node) will not be added to the cluster until the system manager is started the first time. Message handling (PRIMSGQ. MIMIX does not automatically create transfer definitions. it will be used by MIMIX if communications path specified by the primary transfer definition is not available. or @. MIMIX. #. The remaining characters can be alphanumeric and can contain a $. Transfer definitions (PRITRFDFN. Only one system in the MIMIX installation can be a management system. SYSMGRDLY) Two parameters define the delay times used for all journal management and system management jobs. the first character must be either A . System type (TYPE) This parameter indicates the role of this system within the MIMIX installation. SECMSGQ) MIMIX uses the centralized message log facility which is common to all MIMIX products. is located in the MIMIXQGPL library. A system can be a management (*MGT) system or a network (*NET) system. Manager delay times (JRNMGRDLY. @. The communications path and protocol are defined in the transfer definitions. System definition (SYSDFN) This parameter is a single-part name that represents a system within a MIMIX installation. a period (. SECTFRDFN) These parameters identify the primary and secondary transfer definitions used for communicating with the system. Note: In the first part of the name. You must specify *TCP as the transfer protocol. or an underscore (_). You can specify a different message queue or optionally specify a secondary message queue. The value of the journal manager delay parameter determines how often the journal manager process checks for work to perform. For MIMIX to be operational. 
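As a sketch of how these parameters fit together, system definitions for a management and a network system might look as follows. The system names come from this chapter's examples; the exact parameter keywords (notably the transfer definition and description keywords) are assumptions and should be verified against the Create System Definition (CRTSYSDFN) command help.

```cl
/* Management system; accepts the default PRIMARY transfer        */
/* definition name, so a transfer definition named PRIMARY must   */
/* exist before MIMIX is operational.                             */
CRTSYSDFN SYSDFN(LONDON) TYPE(*MGT) PRITFRDFN(PRIMARY) +
          TEXT('Management system')

/* Network system in the same MIMIX installation.                 */
CRTSYSDFN SYSDFN(CHICAGO) TYPE(*NET) PRITFRDFN(PRIMARY) +
          TEXT('Network system')
```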
The value of the system manager delay parameter determines how often the system manager process checks for work to perform. the transfer definition names you specify must exist. Cluster member (CLUMBR) You can specify if you want this system definition to be a member of a cluster. If you accept the default value primary for the Primary transfer definition.Tips for system definition parameters Tips for system definition parameters This topic provides tips for using the more common options for system definitions. These parameters provide additional flexibility by allowing you to identify the message queues associated with the system definition and define the message filtering criteria for each message queue. This name is a logical representation and does not need to match the system name that it represents. 147 . Cluster transfer definition (CLUTFRDFN) You can specify the transfer definition that cluster resource services will use to communicate to the node and for the node to communicate with other nodes in the cluster. create a transfer definition by that name. For libraries created in a user ASP. The management or network role of the system affects the results of the time you specify on a system definition. all objects in the library must be in the same ASP as the library.Output queue values (OUTQ. DSKSTGLMT) Three parameters define information about MIMIX data libraries on the system. restart daily to maintain the MIMIX environment. DFTJOBD) MIMIX runs under the MIMIXOWN user profile and uses several job descriptions to optimize MIMIX processes. ASP group (ASPGRP) This parameter is used for installing MIMIX into a switchable independent ASP. The default job descriptions are stored in the MIMIXQGPL library. including the system manager and journal manager. Keep notifications (KEEPNEWNFY. You can change the time at which these jobs restart. 
The Disk storage limit (GB) parameter specifies the maximum amount of disk storage that may be used for the MIMIX data libraries. MIMIX system history includes the system message log. Keep history (KEEPSYSHST. KEEPACKNFY) Two parameters specify the number of days to retain new and acknowledged notifications. HOLD. COPIES) These parameters control characteristics of printed output. MGRJOBD. The Keep MIMIX data (days) parameter specifies the number of days to retain objects in the MIMIX data library. The only time this parameter should be used is in the case of an INTRA system (which is handled by the default value) or in replication environments where it is necessary to have extra MIMIX system definitions that will “switch locations” along with the switchable independent ASP. User profile and job descriptions (SBMUSR. including the container cache used by system journal replication processes. DTALIBASP. KEEPDGHST) Two parameters specify the number of days to retain MIMIX system history and data group history. and allows you to specify a MIMIX installation library that does not match the library name of the other system definitions. changing the product library is considered an advanced technique and should not be attempted without the assistance of a Certified MIMIX Consultant. and defines the ASP group (independent ASP) in which the product library exists. LPI. this parameter should only be used in replication 148 . SAVE) These parameters identify an output queue used by this system definition and define characteristics of how the queue is handled. You can hold spooled files on the queue and save spooled files after they are printed. storage limit (KEEPMMXDTA. Product library (PRDLIB) This parameter is used for installing MIMIX into a switchable independent ASP. You can keep both types of history information on the system for up to a year. The Keep new notifications (days) parameter specifies the number of days to retain new notifications in the MIMIX data library. 
MIMIX data library ASP This parameter identifies the auxiliary storage pool (ASP) from which the system allocates storage for the MIMIX data library. In environments involving a switchable independent ASP, changing the ASP group is considered an advanced technique due to its complexity and should not be attempted without the assistance of a Certified MIMIX Consultant.

Keep acknowledged notifications (days) This parameter specifies the number of days to retain acknowledged notifications in the MIMIX data library. Data group history includes time stamps and distribution history.

Printing (CPI, FORMLEN, OVRFLW) This parameter specifies printing characteristics, such as characters per inch, form length, and overflow line, for reports that MIMIX generates. Any MIMIX functions that generate reports use this output queue.

Job restart time (RSTARTTIME) This parameter determines when system-level MIMIX jobs restart. Due to its complexity, changing the job restart time is considered an advanced technique.

Creating system definitions

To create a system definition, do the following:
1. From the MIMIX Configuration Menu, select option 1 (Work with system definitions) and press Enter. The Work with System Definitions display appears.
2. Type a 1 (Create) next to the blank line at the top of the list area and press Enter. The Create System Definition (CRTSYSDFN) display appears.
3. Specify a name at the System definition prompt.
Note: Once created, the name can only be changed by using the Rename System Definition command.
4. Specify the appropriate value for the system you are defining at the System type prompt.
5. At the Description prompt, type a brief description of the system definition.
6. If you want to verify or change values for additional parameters, press F10 (Additional parameters).
7. Specify the names of the transfer definitions you want at the Primary transfer definition and, if desired, the Secondary transfer definition prompts.
8. If you want to use a secondary message queue, at the prompts for Secondary message handling specify the name and library of the message queue and values indicating the severity and the information type of messages to be sent to the queue.
9. If the system definition is for a cluster environment, do the following:
a. Specify *YES at the Cluster member prompt.
b. Verify that the value of the Cluster transfer definition is what you want. If necessary, change the value.
10. To create the system definition, press Enter.

Changing a system definition

To change a system definition, do the following:
1. From the MIMIX Configuration Menu, select option 1 (Work with system definitions) and press Enter. The Work with System Definitions display appears.
2. Type a 2 (Change) next to the system definition you want and press Enter. The Change System Definition (CHGSYSDFN) display appears.
3. Press F10 (Additional parameters).
4. Locate the prompt for the parameter you need to change and specify the value you want. Press F1 (Help) for more information about the values for each parameter.
5. To save the changes, press Enter.

Multiple network system considerations

When configuring an environment that has multiple network systems, it is recommended that each system definition in the environment specify the same name for the Primary transfer definition prompt. The default value for the name of a transfer definition is PRIMARY. If you use a different name, you need to specify that name as the value for the Primary transfer definition prompt in all system definitions in the environment. Similarly, if you use secondary transfer definitions, it is recommended that each system definition in the multiple network environment specify the same name for the Secondary transfer definition prompt. (The value of the Secondary transfer definition should be different than the value of the Primary transfer definition.) This configuration is necessary for the MIMIX system managers to communicate between the management system and all systems in the network. Data groups can use the same transfer definitions that the system managers use, or they can use differently named transfer definitions.

Figure 11 shows system definitions in a multiple network system environment.

                        Work with System Definitions
                                                         System:   LONDON
 Type options, press Enter.
   1=Create   2=Change   3=Copy   4=Delete   5=Display   6=Print   7=Rename
   11=Verify communications link   12=Journal definitions
   13=Data group definitions   14=Transfer definitions

                       -Transfer Definitions-   Cluster
 Opt   System    Type   Primary     Secondary   Member
  __   _______
  __   CHICAGO   *NET   PRIMARY     *NONE       *NO
  __   NEWYORK   *NET   PRIMARY     *NONE       *NO
  __   LONDON    *MGT   PRIMARY     *NONE       *NO

Figure 11. Example of system definition values in a multiple network system environment.

Figure 12 shows the recommended transfer definition configuration, which uses the value *ANY for both systems identified by the transfer definition. The management system (LONDON) specifies the value PRIMARY for the primary transfer definition in its system definition. The management system can communicate with the other systems using any transfer definition named PRIMARY that has a value for System 1 or System 2 that resolves to its system name (LONDON). Similarly, the management system LONDON could also use any transfer definition that specified the name LONDON as the value for either System 1 or System 2.

                       Work with Transfer Definitions
                                                         System:   LONDON
 Type options, press Enter.
   1=Create   2=Change   3=Copy   4=Delete   5=Display   6=Print   7=Rename
   11=Verify communications link

       ---------Definition---------               Threshold
 Opt   Name         System 1   System 2  Protocol  (MB)
  __   __________   _______    ________
  __   PRIMARY      *ANY       *ANY      *TCP      *NOMAX

Figure 12. Example of a contextual (*ANY) transfer definition in use for a multiple network system environment.
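The create procedure above can also be entered directly as a command rather than through the menus. The following sketch is an illustration only: the parameter keywords shown (SYSDFN, TYPE, TEXT) are assumptions based on the prompt names on the Create System Definition display, and the system name is hypothetical. Prompting the command with F4 shows the actual parameter names, including the additional parameters that F10 reveals.

```
CRTSYSDFN SYSDFN(CHICAGO) TYPE(*NET) TEXT('Chicago network system')
```

In a multiple network environment such as the one in Figure 11, each such system definition would keep the default PRIMARY for its primary transfer definition so that the system managers can communicate.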
CHAPTER 8  Configuring transfer definitions

By creating a transfer definition, you identify to MIMIX the communications path and protocol to be used between two systems. A pair of systems consists of a management system and a network system. You need at least one transfer definition for each pair of systems between which you want to perform replication. You can also define an additional communications path in a secondary transfer definition. If configured, MIMIX can automatically use a secondary transfer definition if the path defined in your primary transfer definition is not available. If you want to be able to use different transfer protocols between a pair of systems, create a transfer definition for each protocol.

In an Intra environment, a transfer definition defines a communications path and protocol to be used between the two product libraries used by Intra. For detailed information about configuring an Intra environment, refer to “Configuring Intra communications” on page 514.

System-level communication must be configured and operational before you can use a transfer definition. Once transfer definitions exist for MIMIX, they can be used for other functions, such as the Run Command (RUNCMD), or by other MIMIX products for their operations.

The topics in this chapter include:
• “Tips for transfer definition parameters” on page 156 provides tips for using the more common options for transfer definitions.
• “Using contextual (*ANY) transfer definitions” on page 160 describes using the value (*ANY) when configuring transfer definitions.
• “Creating a transfer definition” on page 163 provides the steps to follow for creating a transfer definition.
• “Changing a transfer definition” on page 165 provides the steps to follow for changing a transfer definition. This topic also includes a sub-task for changing a transfer definition when converting to a remote journaling environment.
• “Finding the system database name for RDB directory entries” on page 167 provides the steps to follow for finding the system database name for RDB directory entries.
• “Starting the TCP/IP server” on page 168 provides the steps to follow if you need to start the Lakeview TCP/IP server.
• “Using autostart job entries to start the TCP server” on page 169 provides the steps to configure the Lakeview TCP server to start automatically every time the MIMIX subsystem is started.
• “Verifying a communications link for system definitions” on page 173 provides the steps to verify that the communications link defined for each system definition is operational.
• “Verifying the communications link for a data group” on page 174 provides a procedure to verify the primary transfer definition used by the data group.
Tips for transfer definition parameters

This topic provides tips for using the more common options for transfer definitions. Context-sensitive help is available online for all options on the transfer definition commands.

Transfer definition (TFRDFN) This parameter is a three-part name that identifies a communications path between two systems. The first part of the name identifies the transfer definition. It is recommended that you use PRIMARY as the name of one transfer definition. The second and third parts of the name identify two different system definitions which represent the systems between which communication is being defined. You can explicitly specify the two systems, or you can allow MIMIX to resolve the names of the systems. To support replication, a transfer definition must identify the two systems that will be used by the data group. For more information about allowing MIMIX to resolve the system names, see “Using contextual (*ANY) transfer definitions” on page 160.

Short transfer definition name (TFRSHORTN) This parameter specifies the short name of the transfer definition to be used in generating a relational database (RDB) directory name. The short transfer definition name must be a unique, four-character name if you specify to have MIMIX manage your RDB directory entries. It is recommended that you use the default value *GEN to generate the name. The generated name is a concatenation of the first character of the transfer definition name, the last character of the system 1 name, and the last character of the system 2 name; the fourth character will be either a blank, #, $, or @. For more information, see “Naming convention for remote journaling environments with 2 systems” on page 185.

Transfer protocol (PROTOCOL) This parameter specifies the communications protocol to be used. Each protocol has a set of related parameters. If you change the protocol specified after you have created the transfer definition, MIMIX saves information about both protocols.

For the *TCP protocol the following parameters apply:
• System x host name or address (HOST1, HOST2) These two parameters specify the host name or address of system 1 and system 2, respectively. The name is a mixed-case host alias name or a TCP address (nnn.nnn.nnn.nnn) and can be up to 256 characters in length. For the HOST1 parameter, the special value *SYS1 indicates that the host name is the same as the name specified for System 1 in the Transfer definition parameter. Similarly, for the HOST2 parameter, the special value *SYS2 indicates that the host name is the same as the name specified for System 2 in the Transfer definition parameter.
Note: In the first part of the name, the first character must be either a letter (A - Z), a single digit number (0 - 9), $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_).
Note: The specified value is also used when starting the Lakeview TCP Server (STRSVR command). The HOST parameter on the STRSVR command is limited to 80 or fewer characters.
• System x port number or alias (PORT1, PORT2) These two parameters specify the port number or port alias of system 1 and system 2, respectively. The value of each parameter can be a 14-character mixed-case TCP port number or port alias with a range from 1000 through 55534. By default, the PORT1 parameter uses the port 50410. For the PORT2 parameter, the default special value *PORT1 indicates that the value specified on the System 1 port number or alias (PORT1) parameter is used. To avoid potential conflicts with designations made by the operating system, it is recommended that you use values between 40000 and 55500. If you configured TCP using port aliases in the service table, specify the alias name instead of the port number; it is recommended that you limit its length to 18 characters.

For the *SNA protocol the following parameters apply:
• System x location name (LOCNAME1, LOCNAME2) These two parameters specify the location name or address of system 1 and system 2, respectively. The value of each parameter is the unique location name that identifies the system to remote devices. For the LOCNAME1 parameter, the special value *SYS1 indicates that the location name is the same as the name specified for System 1 on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2 parameter, the special value *SYS2 indicates that the location name is the same as the name specified for System 2 on the Transfer definition (TFRDFN) parameter.
• System x network identifier (NETID1, NETID2) These two parameters specify the name of the network for system 1 and system 2, respectively. The default value *LOC indicates that the network identifier for the location name associated with the system is used. The special value *NETATR indicates that the value specified in the system network attributes is used. The special value *NONE indicates that the network has no name. For the NETID2 parameter, the special value *NETID1 indicates that the network identifier specified on the System 1 network identifier (NETID1) parameter is used.
• SNA mode (MODE) This parameter specifies the name of the mode description used for communication. The default name is MIMIX. The special value *NETATR indicates that the value specified in the system network attributes is used.

The following parameters apply for the *OPTI protocol:
• System x location name (LOCNAME1, LOCNAME2) These two parameters specify the location name or address of system 1 and system 2, respectively. The value of each parameter is the unique location name that identifies the system to remote devices. For the LOCNAME1 parameter, the special value *SYS1 indicates that the location name is the same as the name specified for System 1 on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2 parameter, the special value *SYS2 indicates that the location name is the same as the name specified for System 2 on the Transfer definition (TFRDFN) parameter.

Threshold size (THLDSIZE) This parameter is accessible when you press F10 (Additional parameters). This parameter controls the size of files and objects that are sent by specifying the maximum size of files and objects to send. If the file or object exceeds the threshold, it is not sent. Transmitting large files and objects can consume excessive communications bandwidth and negatively impact communications performance, especially for slow communication lines. Valid values range from 1 through 9999999. The special value *NOMAX indicates that no maximum value is set.

Relational database (RDB) This parameter is accessible when you press F10 (Additional parameters) and is valid when the default remote journaling configuration is used. The Relational database (RDB) parameter also applies to *TCP protocol. The parameter consists of four relational database values, which identify the communications path used by the IBM i remote journal function to transport journal entries: a relational database directory entry name, two system database names, and a management indicator for directory entries. The four elements of the relational database parameter are:
• Directory entry This element specifies the name of the relational database entry. The default value *GEN causes MIMIX to create an RDB entry and add it to the relational database. The generated name is in the format MX_nnnnnnnnnn_ssss, where nnnnnnnnnn is the 10-character installation name and ssss is the transfer definition short name. This parameter creates two RDB directory entries, one on each system identified in the transfer definition. Each entry identifies the other system’s relational database. When you specify the special value *NONE, no directory entry is generated. If you specify a value for the RDB parameter, the directory entry is not added or changed by MIMIX.
Note: If you use the value *ANY for both system 1 and system 2 on the transfer definition, *NONE is used for the directory entry name, and no directory entry is generated. A directory entry is generated if you use the value *ANY for only one of the systems on the transfer definition. This directory entry is generated for the system that is specified as something other than *ANY. For more information about the use of the value *ANY on transfer definitions, see “Using contextual (*ANY) transfer definitions” on page 160.
• System 1 relational database This element specifies the name of the relational database for System 1. The default value *SYSDB specifies that MIMIX will determine the relational database name.
Note: For remote journaling that uses an independent ASP, specify the database name for the independent ASP. If you are managing the RDB directory entries and you need to determine the system database name, refer to “Finding the system database name for RDB directory entries” on page 167.
• System 2 relational database This element specifies the name of the relational database for System 2. The default value *SYSDB specifies that MIMIX will determine the relational database name.
Note: For remote journaling that uses an independent ASP, specify the database name for the independent ASP. If you are managing the RDB directory entries and you need to determine the system database name, refer to “Finding the system database name for RDB directory entries” on page 167.
• Manage directory entries This element specifies whether MIMIX will manage the relational database directory entries associated with the transfer definition, whether the directory entry name is specified or whether the directory entry name is generated by MIMIX. Management of the relational database directory entries consists of adding, changing, and deleting the directory entries on both systems, as needed, when the transfer definition is created, changed, or deleted. The special value *DFT indicates that MIMIX manages the relational database directory entries only when the name is generated using the special value *GEN on the Directory entry element of this parameter. The special value *YES indicates that the directory entries on each system are managed by MIMIX. If the relational database directory entries do not exist, MIMIX adds them and sets any needed system values. If they do exist, MIMIX changes them to match the values specified by the Relational database (RDB) parameter. When any of the transfer definition relational database values change, the directory entry is also changed. When the transfer definition is deleted, the directory entries are also deleted.

Manage autostart job entries (MNGAJE) This parameter is accessible when you press F10 (Additional parameters). This parameter determines whether MIMIX will use this transfer definition to manage an autostart job entry for starting the TCP server for the MIMIXQGPL/MIMIXSBS subsystem description. The shipped default is *YES, whereby MIMIX will add, change, or remove an autostart job entry based on changes to this transfer definition. An autostart job entry is created on each system related to the transfer definition. This parameter only affects transfer definitions for TCP protocol which have host names of 80 or fewer characters.
Note: For a given port number or alias, only one autostart job entry will be created regardless of how many transfer definitions use that port number or alias.

Using contextual (*ANY) transfer definitions

When the three-part name of a transfer definition specifies the value *ANY for System 1 or System 2 instead of system names, MIMIX uses information from the context in which the transfer definition is called to resolve to the correct system. Such a transfer definition is called a contextual transfer definition.

The *ANY value represents several transfer definitions, one for each system definition. For example, a transfer definition PRIMARY SYSA *ANY in an installation that has three system definitions (SYSA, SYSB, INTRA) represents three transfer definitions:
• PRIMARY SYSA SYSA
• PRIMARY SYSA SYSB
• PRIMARY SYSA INTRA
This definition can be used to provide the necessary parameters for establishing communications between SYSA and any other system.

In MIMIX source-send configurations, a contextual transfer definition may be an aid in configuration. For remote journaling environments, best practice is to use transfer definitions that identify specific system definitions in the three-part transfer definition name. Although you can use contextual transfer definitions with remote journaling, they are not recommended. For more information, see “Considerations for remote journaling” on page 161.

Search and selection process

Data group definitions and system definitions include parameters that identify associated transfer definitions. When an operation requires a transfer definition, MIMIX uses the context of the operation to determine the fully qualified name. For example, when starting a data group, MIMIX uses information in the data group definition (the systems specified in the data group name and the specified transfer definition name) to derive the fully qualified transfer definition name. If you create a transfer definition named PRIMARY SYSA *ANY and you have specified *TFRDFN for the Protocol parameter on such commands as RUNCMD or VFYCMNLNK, MIMIX searches your system and selects those systems with a transfer definition that matches the transfer definition that you specified. If MIMIX is still unable to find an appropriate transfer definition, the following search order is used:
1. PRIMARY SYSA SYSB
2. PRIMARY *ANY SYSB
3. PRIMARY SYSA *ANY
4. PRIMARY SYSB SYSA
5. PRIMARY *ANY SYSA
6. PRIMARY SYSB *ANY
7. PRIMARY *ANY *ANY

Considerations for remote journaling

Best practice for a remote journaling environment is to use a transfer definition that identifies specific system definitions in the three-part transfer definition name, for example, PRIMARY SYSA SYSB. By specifying both systems, the transfer definition can be used for replication from either direction.

MIMIX Remote Journal support requires that each transfer definition that will be used has a relational database (RDB) directory entry to properly identify the remote system. An RDB directory entry cannot be added to a transfer definition using the value *ANY for the remote system. If you do use a contextual transfer definition in a remote journaling environment, the value *ANY can be used for the system where the local journal (source) resides. This value can be either the second or third part of the three-part name. For example, a transfer definition of PRIMARY name *ANY is valid in a remote journaling environment, where name identifies the system definition for the system where the remote journal (target) resides. The command would look like this:
CRTTFRDFN TFRDFN(PRIMARY name *ANY) TEXT('description')
A transfer definition of PRIMARY *ANY name is also valid.

To support a switchable data group when using contextual transfer definitions, each system in the remote journaling environment must be defined by a contextual transfer definition. For example, in an environment with systems NEWYORK and CHICAGO, you would need a transfer definition named PRIMARY NEWYORK *ANY as well as a transfer definition named PRIMARY CHICAGO *ANY.

Considerations for MIMIX source-send configurations

When creating a transfer definition for a MIMIX source-send configuration that uses contextual system capability (*ANY) and the TCP protocol, take the default values for other parameters on the CRTTFRDFN command. For example, using the naming conventions for contextual systems, the command would look like this:
CRTTFRDFN TFRDFN(PRIMARY *ANY *ANY) TEXT('Recommended configuration')

If there is an Intra system definition defined, an additional transfer definition is needed. However, the transfer definition must specify a unique port number to communicate with Intra. The following is an example of an additional transfer definition that uses port number 42345 to establish communications with the Intra system:
CRTTFRDFN TFRDFN(PRIMARY *ANY INTRA) PORT2(42345) TEXT('Recommended configuration')
Note: Ensure that you consult with your site TCP administrator before making these changes.
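For comparison with the contextual (*ANY) examples above, a transfer definition can also name both systems explicitly. The following sketch uses only parameters described in “Tips for transfer definition parameters”; the system names and description are illustrative, and the values shown for HOST1, HOST2, PORT2, THLDSIZE, and MNGAJE are simply the shipped defaults written out:

```
CRTTFRDFN TFRDFN(PRIMARY CHICAGO LONDON) PROTOCOL(*TCP) +
          HOST1(*SYS1) HOST2(*SYS2) PORT1(50410) PORT2(*PORT1) +
          THLDSIZE(*NOMAX) MNGAJE(*YES) TEXT('Chicago to London')
```

Because both systems are specified, a definition of this form can be used for replication in either direction and allows MIMIX to generate and manage the RDB directory entries, which a definition with *ANY for the remote system cannot.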
When the VFYCMNLNK command is called from option 11 on the Work with System Definitions display or option 11 on the Work with Data Groups display. on the SYSA system there would have to be a controller called SYSB that is used for SYSA to SYSB communications. or included in automation programs. when the command is called from option 11 on the Work with Transfer Definitions display. The MIMIX system definitions should match the net attribute system name (DSPNETA). *TCP protocol: The MIMIX system definition names should correspond to DNS or host table entries that tie the names to a specific TCP address. MIMIX determine the specific system names. on SYSB. with two MIMIX systems called SYSA and SYSB. These commands do not handle transfer definitions that specify *ANY in the three-part name. Additional usage considerations for contextual transfer definitions The Run Command (RUNCMD) and the Verify Communications Link (VFYCMNLNK) commands requires specific system names to verify communications between systems. 162 . enclosed in apostrophes. do the following: 1. press F10 (Additional parameters). b. At the Threshold size (MB) prompt. then press Enter. If you are creating a transfer definition for a cluster environment. select option 21 (Work with transfer definitions) and press Enter. Optional step: If you need to set a maximum size for files and objects to be transferred. 5. From the MIMIX Cluster Menu. 4. specify the communications protocol you want. Make any necessary changes. you must accept the default of *TCP for the Transfer protocol prompt. 7. c.Creating a transfer definition Creating a transfer definition System-level communication must be configured and operational before you can use a transfer definition. At the Transfer protocol prompt. Type 1 (Create) next to the blank line at the top of the list area and press Enter. At the Transfer definition prompts. Optional step: If you need to change the relational database information. 
To create the transfer definition. specify a name and the two system definitions between which communications will occur. The Create Transfer Definition display appears. select option 2 (Work with transfer definitions) and press Enter. press F10 (Additional parameters). This short transfer definition name is used in generating relational database directory entry names if you specify to have MIMIX manage your RDB directory entries. 2. To create a transfer definition. If MIMIX is not managing the RDB directory entries. Access the Work with Transfer Definitions display by doing one of the following: • • From the MIMIX Configuration Menu. it may be necessary to change the RDB values. press Enter. Verify that the values shown are what you want. The Work with Transfer Definitions display appears. 8. Do the following: a. At the Description prompt. 163 . type a text description of the transfer definition. 3. See “Tips for transfer definition parameters” on page 156 for details about the Relational database (RDB) parameter. At the Short transfer definition name prompt. 6. Additional parameters for the protocol you selected appear on the display. accept the default value *GEN to generate a short transfer definition name. specify a valid value. 164 . press F10 (Additional parameters). specify a valid value. specify the value you want.Changing a transfer definition Changing a transfer definition To change a transfer definition. change. Press F10 165 . 6. Press Enter to display the parameters for the specified transfer protocol. only one autostart job entry will be created regardless of how many transfer definitions use that port number or alias. specify the desired values for each of the four elements and press Enter. From the MIMIX Configuration menu. 8. If you need to set a maximum size for files and objects to be transferred. 2. press F10 (Additional parameters). If you need to change your relational database information. 
Type 2 (Change) next to the definition you want and press Enter. press Enter. When *YES is specified. 3. At the Relational database (RDB) prompt. At the Manage autostart job entries prompt. 5. The Change Transfer Definition (CHGTFRDFN) display appears. An autostart job entry is created on each system related to the transfer definition. Locate the prompt for the parameter you need to change and specify the value you want. do the following: 1. If you want to change which protocol is used between the specified systems. At the Threshold size (MB) prompt. For special considerations when changing your transfer definitions that are configured to use RDB directory entries see “Tips for transfer definition parameters” on page 156. select option 2 (Work with transfer definitions) and press Enter. modify the transfer definition you plan to use as follows: 1. The Work with Transfer Definitions display appears. Press F1 (Help) for more information about the values for each parameter. or remove the autostart entry based on changes to the transfer definition. If you need to create or remove an autostart job entry for the TCP server. 7. 4. Type a 2 (Change) next to the definition you want and press Enter. MIMIX will add. The Work with Transfer Definitions display appears. before you complete this procedure refer to “Using contextual (*ANY) transfer definitions” on page 160. Contextual transfer definitions are not recommended in a remote journaling environment. Changing a transfer definition to support remote journaling If the value *ANY is specified for either system in the transfer definition. The Change Transfer Definition (CHGTFRDFN) display appears. For a given port number or alias. specify the value you want for the Transfer protocol prompt. press F10 (Additional parameters). 2. To support remote journaling. select option 2 (Work with transfer definitions) and press Enter. 
Access the Work with Transfer Definitions display by doing one of the following: • From the MIMIX Configuration Menu. To save changes to the transfer definition. 3. Note: See “Tips for transfer definition parameters” on page 156 for detailed information about the Relational database (RDB) parameter. At the Relational database (RDB) prompt. Also see “Finding the system database name for RDB directory entries” on page 167 for special considerations when changing your transfer definitions that are configured to use RDB directory entries. then press Page Down.(Additional parameters). specify the desired values for each of the four elements and press Enter. 166 . 4. If you did not accept default values of *GEN for the Directory entry element and *DFT for the Manage directory entries element of the RDB parameter when you created your transfer definition. 167 . do the following: 1. 2. Login to the system that was specified for System 1 in the transfer definition.Finding the system database name for RDB directory entries Finding the system database name for RDB directory entries To find the system database name. Using IBM i commands to work with RDB directory entries The Manage directory entries element of the Relational Database (RDB) parameter in the transfer definition determines whether MIMIX manages RDB directory entries. 3. or if you specified *NO for the Manage directory entries element. From the command line type DSPRDBDIRE and press Enter. Repeat steps 1 and 2 to find the system database name for System 2. you can use IBM i commands to add and change RDB directory entries. 2=Change. You can also use these options from the Work with Relational Database Directory entries display (WRKRDBDIRE command): 1=Add. Look for the relational database directory entry that has a corresponding remote location name of *LOCAL. and 5=Display details. The Change RDB Directory Entry (CHGRDBDIRE) command will change an existing RDB directory entry. 
The Add RDB Directory Entry (ADDRDBDIRE) command will add an entry. Press Enter. You can also start the TCP/IP server automatically through an autostart job entry. You can use the Work with Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS subsystem with a function of PGM-LVSERVER. From a 5250 emulator. Verify that the server job is running under the MIMIX subsystem on that system. Either you can change the transfer definition to allow MIMIX to create and manage the autostart job entry for the TCP/IP server. The Start Lakeview TCP Server display appears. MIMIX only manages entries for the server when they are created by transfer definitions. or you can add your own autostart job entry. 4. Once the TCP communication connections have been defined in a transfer definition. the TCP server must be started on each of the systems identified by the transfer definition. 3. do the following on the system on which you want to start the TCP server: 1.Starting the TCP/IP server Use this procedure if you need to manually start the TCP/IP server. specify the host name or address for the local system as defined in the transfer definition. The Utilities Menu appears. 5. 168 . From the MIMIX Intermediate Main Menu. select option 13 (Utilities menu) and press Enter. At the Host name or address prompt. 6. specify the port number or alias as defined in the transfer definition for the local system. Note: Use the host name and port number (or port alias) defined in the transfer definition for the system on which you are running this command. At the Port number or alias prompt. you must have an entry in the service table on this system that equates the alias to the port number. 2. Note: If you specify an alias. Select option 51 (Start TCP server) and press Enter. Type 3 (Autostart job entries) and press enter. Changing an autostart job entry and its related job description When the host or port information for a system identified in a transfer definition changes. 
MIMIX supports automatically creating and managing autostart job entries for the TCP server with the MIMIXSBS subsystem. If you prefer. Identifying the current autostart job entry information This procedure enables you to identify the autostart job entry for the STRSVR command in the MIMIXSBS subsystem and display the current information within the job description associated with the entry. do the following: a. Locate the name and library of the job description for the autostart job entry for the STRSVR. this job description name is either the port alias name or PORTnnnnn where nnnnn is the port number and the library name is the name of the MIMIX installation library. The transfer definition must specify MNGAJE(*NO) and you must have an autostart job entry on each system that can use the transfer definition. The Display Autostart Job Entries display appears. The Display with Job Descriptions display appears. type the command DSPJOBD library/job_description and press Enter.Using autostart job entries to start the TCP server Using autostart job entries to start the TCP server To use TCP/IP communications. MIMIX automatically updates this information for MIMIX-managed autostart job entries when the transfer definition is updated. To display the STRSVR details specified in the job description. 4. b. Type the command DSPSBSD MIMIXQGPL/MIMIXSBS and press Enter. Press Enter. 169 . Using the job description information identified in Step 3. Because this can become a time consuming task that can be mistakenly forgotten. the MIMIX TCP/IP server must be started each time the MIMIX subsystem (MIMIXSBS) is started. The Display Subsystem Description display appears. do the following: 1. Job Description. To display the autostart job entry information. The information in this field shows the current values of the STRSVR command used by the autostart job entry. Page down to view the Request data field. you can create and manage autostart job entries yourself. 
On the Display Autostart Job Entries display, the columns Job, Job Description, and Library identify autostart job names and their job description information. The autostart job entry uses a job description that contains the STRSVR command, which will automatically start the Lakeview TCP server when the MIMIXSBS subsystem is started. The STRSVR command is defined in the Request data or command (RQSDTA) parameter of the job description.

MIMIX updates these entries automatically when transfer definitions for TCP protocol specify *YES for the Manage autostart job entries (MNGAJE) parameter. However, if the transfer definition specifies MNGAJE(*NO) and you are managing the autostart job entries for the STRSVR command and their associated job descriptions yourself, you must update them when the host or port information for a system in the MIMIX environment changes. Specifically, the following changes to a transfer definition require changing a user-managed autostart job entry or its associated job description on the local system:
• A change to the port number or alias identified in the PORT1 or PORT2 parameters requires replacing the job description and autostart job entry.
• A change to the host name or address identified in the HOST1 or HOST2 parameters requires changing the job description.
• If the transfer definition was renamed or copied so that the value of HOST1(*SYS1) or HOST2(*SYS2) no longer resolves to the same system definition, the job description must be changed.

170

Updating host information for a user-managed autostart job entry

Use this procedure to update a user-managed autostart job entry which starts the STRSVR command with the MIMIXSBS subsystem so that the request is submitted with the correct host information.

Important! Do not use this procedure for MIMIX-managed autostart job entries. Autostart job entries for the server are user-managed when the transfer definition specifies MNGAJE(*NO).

Perform this procedure from the local system, which is the system for which information changed within the transfer definition. Do the following:

1. Identify the job description and library for the autostart job entry using the procedure in “Identifying the current autostart job entry information” on page 169. This information is needed in the following step.
2. Do the following:
   a. Type CHGJOBD and press F4 (Prompt). The Change Job Description display appears.
   b. For the Job description and Library prompts, specify the job description and library names from Step 1. Press Enter.
   c. Press F10 (Additional parameters), then Page Down to locate Request data or command (RQSDTA). The Request data or command prompt shows the current values of the STRSVR command in the following format:
      'installation_library/STRSVR HOST(''local_host_name'') PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)'
   d. Change the value specified for HOST so that the local_host_name is the host name or address specified for the local system in the transfer definition.
   e. Press Enter. The job description is changed.

171

Updating port information for a user-managed autostart job entry

This procedure identifies how to update the port information for a user-managed autostart job entry that starts the Lakeview TCP server with the MIMIXSBS subsystem.

Important! Do not use this procedure for MIMIX-managed autostart job entries. Autostart job entries for the server are user-managed when the transfer definition specifies MNGAJE(*NO).

Perform this procedure from the local system, which is the system for which information changed within the transfer definition. Do the following:

1. Identify the job name, job description, and library for the autostart job entry using the procedure in “Identifying the current autostart job entry information” on page 169. This information is needed in the following steps.
2. Remove the old autostart job entry by specifying the job name from Step 1 for job_name in the following command:
   RMVAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(job_name)
3. Remove the old job description by specifying the job description name and library from Step 1 in the following command:
   DLTJOBD JOBD(library/job_description)
4. Create a new job description for the autostart job entry using the following command:
   CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD) TOLIB(installation_library) NEWOBJ(job_description_name)
   where installation_library is the name of the library for the MIMIX installation and where job_description_name follows the recommendation to identify the port for the local system by specifying the port number in the format PORTnnnnn or the port alias.

172

5. Do the following:
   a. Type CHGJOBD and press F4 (Prompt). The Change Job Description display appears.
   b. For the Job description and Library prompts, specify the job description and library you created in Step 4. Press Enter.
   c. Press F10 (Additional parameters). Page Down to locate Request data or command (RQSDTA).
   d. At the Request data or command prompt, specify the STRSVR command in the following format:
      'installation_library/STRSVR HOST(''local_host_name'') PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)'
      Where the values to specify are:
      • installation_library is the name of the library for the MIMIX installation
      • local_host_name is the host name or address from the transfer definition for the local system
      • nnnnn is the new port information from the transfer definition for the local system, specified as either the port number or the port alias
   e. Press Enter. The job description is changed.
6. Create a new autostart job entry using the following command:
   ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(autostart_job_name) JOBD(installation_library/job_description_name)
   Where installation_library/job_description_name specifies the job description from Step 4 and autostart_job_name specifies the same port information and format as specified for the job description name.

Using a different job description for an autostart job entry

When MIMIX manages autostart job entries for the STRSVR command, the default job description used to submit the job is named MIMIXCMN in library MIMIXQGPL. If you want the STRSVR request to run using a different job description, you can do the following:

1. Identify the job description and library for the autostart job entry using the procedure in “Identifying the current autostart job entry information” on page 169. This information is needed in the following step.
2. Do the following:
   a. Type CHGJOBD and press F4 (Prompt). The Change Job Description display appears.
   b. For the Job description and Library prompts, specify the job description and library names from Step 1. Press Enter.
   c. Press F10 (Additional parameters), then Page Down. The Request data or command prompt shows the current values of the STRSVR command.
   d. Change the JOBD parameter shown to specify the library and job description you want.
      Important! Change only the JOBD information for the STRSVR command specified within the RQSDTA parameter. Do not change the HOST or PORT values when the autostart job entry is managed by MIMIX.
   e. Press Enter.

173
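As a concrete sketch of the port-update procedure above, assuming an installation library named MIMIX, host CHICAGO, an old port 50410, and a new port 50411 (all hypothetical values), the full command sequence might look like this; here CHGJOBD is entered directly with its RQSDTA parameter rather than through the F4 prompt:

```
RMVAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT50410)
DLTJOBD JOBD(MIMIX/PORT50410)
CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD) TOLIB(MIMIX) NEWOBJ(PORT50411)
CHGJOBD JOBD(MIMIX/PORT50411) RQSDTA('MIMIX/STRSVR HOST(''CHICAGO'') PORT(50411) JOBD(MIMIXQGPL/MIMIXCMN)')
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT50411) JOBD(MIMIX/PORT50411)
```

The change takes effect the next time the MIMIXSBS subsystem is started.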
Verifying a communications link for system definitions

Do the following to verify that the communications link defined for each system definition is operational:

1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, type a 1 (Work with system definitions) and press Enter.
3. From the Work with System Definitions display, type an 11 (Verify communications link) next to the system definition you want and press Enter. You should see a message indicating the link has been verified.
   Note: If the system manager is not active, do not check the link from the local system. You will see a message "VFYCMNLNK command completed successfully," but you will also see a message in the job log indicating that "communications link failed after 1 request." This indicates that the remote system could not return communications to the local system.
4. Repeat this procedure for all system definitions.

Note: If your transfer definition uses the *TCP communications protocol, then MIMIX uses the Verify Communications Link command to validate the information that has been specified for the Relational database (RDB) parameter. MIMIX also uses VFYCMNLNK to verify that the System 1 and System 2 relational database names exist and are available on each system. If the communications link defined for a system definition uses SNA protocol, this process will only verify that communications to the remote system is successful.

174

Verifying all communications links

The Verify Communications Link (VFYCMNLNK) command requires specific system names to verify communications between systems. When the command is called from option 11 on the Work with System Definitions display or option 11 on the Work with Data Groups display, MIMIX identifies the specific system names. When the command is called from option 11 on the Work with Transfer Definitions display or when entered from a command line, you need to specify a value for the System 1 or System 2 prompt.

Use the following procedure to check all communications links:

1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select the option to work with transfer definitions and press Enter.
3. From the Work with Transfer Definitions display, type an 11 (Verify communications link) next to all transfer definitions and press Enter. You will see the Verify Communications Link display for each transfer definition you selected.
4. Ensure that the values shown for the prompts are what you want and then press Enter.
5. You should see a message "VFYCMNLNK command completed successfully."

Verifying the communications link for a data group

Before you synchronize data between systems, ensure that the communications link for the data group is active. This procedure verifies the primary transfer definition used by the data group. If your data group definition specifies a secondary transfer definition, verify that link as well. If your configuration requires multiple data groups, be sure to check communications for each data group definition.

For transfer definitions using TCP protocol: MIMIX uses the Verify Communications Link (VFYCMNLNK) command to validate the values specified for the Relational database (RDB) parameter. MIMIX also uses VFYCMNLNK to verify that the System 1 and System 2 relational database names exist and are available on each system.

Do the following:

1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, type a 4 (Work with data group definitions) and press Enter.
3. From the Work with Data Group Definitions display, type an 11 (Verify communications link) next to the data group you want and press F4. The Verify Communications Link display appears.
4. Ensure that the values shown for the prompts are what you want and then press Enter.
5. You should see a message "VFYCMNLNK command completed successfully."

175
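As a supplementary check, basic TCP/IP reachability of the remote system can be verified first with the standard IBM i Verify TCP/IP Connection command. This is a general IBM i command, not part of MIMIX, and the host name shown is hypothetical:

```
VFYTCPCNN RMTSYS('NEWYORK')
```

If this basic check fails, correct the underlying TCP/IP configuration before investigating the MIMIX transfer definition.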
CHAPTER 9  Configuring journal definitions

By creating a journal definition you identify to MIMIX a journal environment that can be used in the replication process. MIMIX uses the journal definition to manage the journaling environment, including journal receiver management.

A journal definition does not automatically build the underlying journal environment that it defines. If the journal environment does not exist, it must be built. This can be done after the journal definition is created. Configuration checklists indicate when to build the journal environment.

In most configurations, the RJ link is automatically created for you when you follow the steps of the configuration checklists, which will in turn create a target journal definition with appropriate values to support remote journaling.

The topics in this chapter include:
• “Journal definitions created by other processes” on page 178 describes the security audit journal (QAUDJRN) and other journal definitions that are automatically created by MIMIX.
• “Tips for journal definition parameters” on page 179 provides tips for using the more common options for journal definitions.
• “Journal definition considerations” on page 184 provides things to consider when creating journal definitions for remote journaling.
• “Journal receiver size for replicating large object data” on page 191 provides procedures to verify that a journal receiver is large enough to accommodate large IFS stream files and files containing LOB data.
• “Creating a journal definition” on page 192 provides the steps to follow for creating a journal definition.
• “Changing a journal definition” on page 194 provides the steps to follow for changing a journal definition.
• “Building the journaling environment” on page 195 describes the journaling environment and provides the steps to follow for building it.
• “Changing the journaling environment to use *MAXOPT3” on page 196 describes considerations and provides procedures for changing the journaling environment to use the *MAXOPT3 receiver size option.
• “Changing the remote journal environment” on page 200 provides steps to follow when changing an existing remote journal configuration and, if necessary, to change the receiver size options. The procedure is appropriate for changing a journal receiver library for the target journal in a remote journaling environment or for any other changes that affect the target journal.
• “Adding a remote journal link” on page 202 describes how to create a MIMIX RJ link.
• “Changing a remote journal link” on page 204 describes how to change an existing RJ link.

176

• “Temporarily changing from RJ to MIMIX processing” on page 205 describes how to change a data group configured for remote journaling to temporarily use MIMIX send processing.
• “Changing from remote journaling to MIMIX processing” on page 206 describes how to change a data group that uses remote journaling so that it uses MIMIX send processing. Remote journaling is preferred.
• “Removing a remote journaling environment” on page 207 describes how to remove a remote journaling environment that you no longer need.

177

Journal definitions created by other processes

When you create system definitions, MIMIX automatically creates a journal definition for the security audit journal (QAUDJRN) on that system. The QAUDJRN journal is used only by MIMIX system journal replication processes. If you do not already have a journaling environment for the security audit journal, it will be created when the first data group that replicates from the system journal is started.

In an environment that uses MIMIX Remote Journal support, the process of creating a data group definition creates a remote journal link which in turn creates the journal definition for the target journal. The target journal definition is created using values appropriate for remote journaling. Any journal definitions that are created in this manner will be named with the value specified in the data group definition.

When you create a data group definition, MIMIX automatically creates a journal definition if one does not already exist. Any journal definitions created by another process can be changed if necessary.

178
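To illustrate what building the journaling environment ultimately creates, the underlying IBM i objects behind a journal definition are a journal receiver and a journal, as sketched below with standard IBM i commands. MIMIX normally creates these objects for you when the environment is built; the library, names, and threshold here are hypothetical:

```
CRTJRNRCV JRNRCV(#MXJRN/MYJRN000001) THRESHOLD(6600000)
CRTJRN JRN(#MXJRN/MYJRN) JRNRCV(#MXJRN/MYJRN000001)
```

The THRESHOLD value on CRTJRNRCV is specified in kilobytes; 6600000 KB corresponds to the 6600 MB receiver threshold default discussed later in this chapter.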
Tips for journal definition parameters

This topic provides tips for using the more common options for journal definitions. Context-sensitive help is available online for all options on the journal definition commands.

Journal definition (JRNDFN) - This parameter is a two-part name that identifies a journaling environment on a system. The first part of the name identifies the journal definition. The second part of the name identifies a system definition which represents the system on which you want the journal to reside. When a journal definition for the security audit journal (system journal) is automatically created as a result of creating a system definition, the first part of the name is QAUDJRN.

Note: In the first part of the name, the first character must be either A - Z, $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_). Journal definition names cannot be UPSMON or begin with the characters MM. MIMIX uses the first six characters of the journal definition name to generate the journal receiver prefix. If the last character of a prefix resulting from the journal definition name is numeric, it can become part of the receiver number and no longer match the journal name, so MIMIX restricts the last character of the prefix from being numeric.

There are additional specific naming conventions for journal definitions that are used with remote journaling. If the target journal definition is configured by MIMIX for use with MIMIX RJ support, its name is the first eight characters from the name of the source journal definition followed by the characters @R. If a journal definition name is already in use, the name may include @S, @T, @U, @V, or @W. See “Journal definition considerations” on page 184.

Journal (JRN) - This parameter specifies the qualified name of a journal to which changes to files or objects to be replicated are journaled. For the journal name, the default value *JRNDFN uses the name of the journal definition for the name of the journal. For the journal library, the default value *DFT allows MIMIX to determine the library name based on the ASP in which the journal library is allocated, as specified in the Journal library ASP parameter. If that parameter specifies *ASPDEV, MIMIX uses #MXJRNIASP for the default journal library name; otherwise, the default library name is #MXJRN.

179

Journal library ASP (JRNLIBASP) - This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal library. You can use the default value *CRTDFT or you can specify the number of an ASP in the range 1 through 32. The value *CRTDFT indicates that the command default value for the IBM i Create Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP) from which the system allocates storage for the library. For libraries that are created in a user ASP, all objects in the library must be in the same ASP as the library.

Target journal state (TGTSTATE) - This parameter specifies the requested status of the target journal, and can be used with active journaling support or journal standby state. Use the default value *ACTIVE to set the target journal state to active when the data group associated with the journal definition is journaling on the target system (JRNTGT(*YES)). Use the value *STANDBY to journal objects on the target system while preventing most journal entries from being deposited into the target journal.

Note: Journal standby state and journal caching require that the IBM feature for High Availability Journal Performance be installed. For more information, see “Configuring for high availability journal performance enhancements” on page 309.

Journal receiver prefix (JRNRCVPFX) - This parameter specifies the prefix to be used in the name of journal receivers associated with the journal used in the replication process and the library in which the journal receivers are located. The default value *GEN for the name prefix indicates that MIMIX will generate a unique prefix, which usually is the first six characters of the journal definition name with any trailing numeric characters removed. If that prefix is already used in another journal definition, a unique six character prefix name is derived from the definition name. The prefix must be unique to the journal definition and cannot end in a numeric character. For the library, you can specify a different name or specify the value *JRNLIB to use the same library that is used for the associated journal. If the journal definition will be used in a configuration which broadcasts data to multiple systems, there are additional considerations.

Journal receiver library ASP (RCVLIBASP) - This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal receiver library. You can use the default value *CRTDFT or you can specify the number of an ASP in the range 1 through 32. The value *CRTDFT indicates that the command default value for the IBM i Create Library (CRTLIB) command is used. The value *DFT for the journal receiver library allows MIMIX to determine the library name based on the ASP in which the journal receiver is allocated, as specified in the Journal receiver library ASP parameter. If that parameter specifies *ASPDEV, MIMIX uses #MXJRNIASP for the default journal receiver library name; otherwise, the default library name is #MXJRN. For libraries that are created in a user ASP, all objects in the library must be in the same ASP as the library.

180

Journal caching (JRNCACHE) - This parameter specifies whether the system should cache journal entries in main storage before writing them to disk. Use the recommended default value *BOTH to perform journal caching on both the source and the target systems. You can also specify values *SRC, *TGT, or *NONE.

Receiver change management (CHGMGT, TIME, THRESHOLD, RESETTHLD2 or RESETTHLD) - Several parameters control how journal receivers associated with the replication process are changed. The Receiver change management (CHGMGT) parameter controls whether MIMIX performs change management operations for the journal receivers used in the replication process. The shipped default value of *TIMESIZE results in MIMIX changing journal receivers by both threshold size and time of day. The following parameters specify conditions that must be met before change management can occur:
• Time of day to change receiver (TIME) - You can specify the time of day at which MIMIX changes the journal receiver. The time is based on a 24 hour clock and must be specified in HHMMSS format.
• Receiver threshold size (MB) (THRESHOLD) - You can specify the size, in megabytes, of the journal receiver at which it is changed. The default value is 6600 MB. This value is used when MIMIX or the system changes the receivers. If you change the journal receiver threshold size in the journal definition, the change is effective with the next receiver change. If you decide to decrease the size of the Receiver threshold size, you will need to manually change your journal receiver to reflect this change.
• Reset large sequence threshold (RESETTHLD2) - You can specify the sequence number (in millions) at which to reset the receiver sequence number. When the threshold is reached, the next receiver change resets the sequence number to 1. You can specify a value for this parameter or for the RESETTHLD parameter, but not both.
• Reset sequence threshold (RESETTHLD) - You can specify the sequence number (in millions) at which to reset the receiver sequence number. When the threshold is reached, the next receiver change resets the sequence number to 1. You can specify a value for this parameter or for the RESETTHLD2 parameter, but not both.

Note: RESETTHLD2 accepts larger sequence number values than RESETTHLD; RESETTHLD2 is recommended.

For information about how change management occurs in a remote journal environment and about using other change management choices, see “Journal receiver management” on page 37.

181

Receiver delete management (DLTMGT, KEEPUNSAV, KEEPRCVCNT, KEEPJRNRCV) - Four parameters control how MIMIX handles deleting the journal receivers associated with the replication process. The Receiver delete management (DLTMGT) parameter specifies whether or not MIMIX performs delete management for the journal receivers. By default, MIMIX performs the delete management operations. MIMIX operations can be adversely affected if you allow the system or another process to handle delete management. For example, if another process deletes a journal receiver before MIMIX is finished with it, replication can be adversely affected. All of the requirements that you specify in the following parameters must be met before MIMIX deletes a journal receiver:
• Keep unsaved journal receivers (KEEPUNSAV) - You can specify whether or not to have MIMIX retain any unsaved journal receivers. The default value *YES causes MIMIX to keep unsaved journal receivers until they are saved. Retaining unsaved receivers allows you to back out (rollback) changes in the event that you need to recover from a disaster.
• Keep journal receiver count (KEEPRCVCNT) - You can specify the number of detached journal receivers to retain. For example, if you specify 2 and there are 10 journal receivers including the attached receiver (which is number 10), MIMIX retains two detached receivers (8 and 9) and deletes receivers 1 through 7.
• Keep journal receivers (days) (KEEPJRNRCV) - You can specify the number of days to retain detached journal receivers. For example, if you specify to keep the journal receiver for 7 days and the journal receiver is eligible for deletion, it will be deleted after 7 days have passed from the time of its creation. The exact time of the deletion may vary; for example, the deletion may occur within a few hours after the 7 days have passed.

For information, see “Journal receiver management” on page 37.

Exit program (EXITPGM) - This parameter allows you to specify the qualified name of an exit program to use when journal receiver management is performed by MIMIX. The exit program will be called when a journal receiver is changed or deleted by the MIMIX journal manager. For example, you might want to use an exit program to save journal receivers as soon as MIMIX finishes with them so that they can be removed from the system immediately.

Journal receiver ASP (JRNRCVASP) - This parameter specifies the auxiliary storage pool (ASP) from which the system allocates storage for the journal receivers. The default value *LIBASP indicates that the storage space for the journal receivers is allocated from the same ASP that is used for the journal receiver library.

Receiver size option (RCVSIZOPT) - This parameter specifies what option to use for determining the maximum size of sequence numbers in journal entries written to the attached journal receiver. Changing this value requires that you change to a new journal receiver. When the value *MAXOPT3 is used, the journal receivers cannot be saved and restored to systems with operating system releases earlier than V5R3M0. For additional information, see “Changing the journaling environment to use *MAXOPT3” on page 196.

Minimize entry specific data (MINENTDTA) - This parameter specifies which object types allow journal entries to have minimized entry-specific data. In order for a change to take effect, the journaling environment must be built. For example, a change to this parameter requires more than one journal definition to be changed. For additional information about improving journaling performance with this capability, see “Minimized journal entry data” on page 307.

Threshold message queue (MSGQ) - This parameter specifies the qualified name of the threshold message queue to which the system sends journal-related messages such as threshold messages. The default value *JRNDFN for the queue name indicates that the message queue uses the same name as the journal definition. The value *JRNLIB for the library name indicates that the message queue uses the library for the associated journal.

182
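For instance, the receiver management parameters above could be set on an existing journal definition with the MIMIX Change Journal Definition command along the lines below. The definition name and values are hypothetical, and the exact parameter set accepted by CHGJRNDFN should be confirmed with the command's context-sensitive online help:

```
CHGJRNDFN JRNDFN(MYJRN CHICAGO) CHGMGT(*TIMESIZE) TIME(030000) THRESHOLD(6600) RESETTHLD2(9900) KEEPUNSAV(*YES) KEEPRCVCNT(2)
```

This sketch changes receivers at 03:00:00 or at 6600 MB, resets the sequence number at 9900 million, keeps unsaved receivers, and retains two detached receivers.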
RESETTHLD2 or RESETTHLD) specified in the source journal definition and ignores those specified in the target journal definition. MIMIX ensures that the same prefix is not used more than once on the same system but cannot determine if the prefix is used on a target journal while it is being configured. Therefore. TIME. the target journal definition identifies the remote journal and the system on which the remote journal exists. the source journal definition identifies the source system of the remote journal process and the target journal definition identifies the target system of the remote journal process. Each source journal definition must specify a unique value for the Journal receiver prefix (JRNRCVPFX) parameter. see “Example journal definitions for a switchable data group” on page 185. The receiver name for source and target journals will be the same on the • • • • • 184 . there are additional considerations for the names of journal receivers. Similarly. You can use an existing journal definition as the source journal definition to identify the local journal. When you create a target journal definition instead of having it generated using the Add Remote Journal Link (ADDRJLNK) command. MIMIX cannot change the value of the target journal definition’s Journal receiver prefix (JRNRCVPFX) or Threshold message queue (MSGQ). When a new receiver is attached to the local journal. If the prefix defined by the source journal definition is reused by target journals that reside in the same library and ASP. The receiver prefix specified in the target journal definition is ignored. If you want to use the RJ process in both directions for a switchable data group. To change these values see the procedure in the IBM topic “Library Redirection with Remote Journals” in the IBM eServer iSeries Information Center. In this example. • The System is the value entered in the target journal definition system field. 
Example journal definitions for a switchable data group To support a switchable data group in a remote journaling environment. a switchable data group named PAYABLES is created between systems CHICAGO and NEWYORK. The Journal library ASP will be copied from source journal definition. The Journal library will use the first eight characters of the name of the source journal library followed by the characters @R. MIMIX implements the following naming conventions for the target journal definition and for the objects in its associated journaling environment. If a journal definition name is already in use. @U. System 1 (CHICAGO) is the data source. Naming convention for remote journaling environments with 2 systems If you allow MIMIX to generate the target journal definition when you create a remote journal link. and two for the RJ link used for replication in the opposite direction. or @W. @V. The Message queue library will use the first eight characters of the name of the source message queue library followed by the characters @R. the target journal definition will be named [email protected] CHICAGO. The two-part name of the target journal definition is generated as follows: • The Name is the first eight characters from the name of the source journal definition followed by the characters @R when the journal definition is created for MIMIX RJ support. The Journal receiver prefix will be copied from the source journal definition. follow these same naming conventions to reduce the potential for confusion and errors. the name may instead include @S.Journal definition considerations systems but will not be the same in the journal definitions. If you specify your own target journal definition. the prefix will be the same as that specified in the source journal definition. you need to have four journal definitions configured: two for the RJ link used for normal production-to-backup operations. For example. 
• The Journal name will have the same name as the source journal.
• The Journal receiver library will use the first eight characters of the name of the source journal receiver library followed by the characters @R.
• In the target journal definition, the value for the Receiver change management (CHGMGT) parameter will be *NONE.

Note: Journal definition names cannot be UPSMON or begin with the characters MM.

The data group definition specifies *YES to Use remote journal link. The data group is created using a generated short data group name and using the data group name for the system 1 and system 2 journal definitions. To create the RJ link and associated journal definitions for normal operations, option 10 (Add RJ link) on the Work with Journal Definitions display is used on an existing journal definition named PAYABLES CHICAGO (the first entry listed in Figure 13). This is the source journal definition for normal operations. The process of adding the link creates the target journal definition PAYABLES@R NEWYORK (the last entry listed in Figure 13). To create the RJ link and associated definitions for replication in the opposite direction, a new source journal definition, PAYABLES NEWYORK, is created (the second entry listed in Figure 13). Then that definition is used to create the second RJ link, which in turn generates the target journal definition PAYABLES@R CHICAGO (the third entry listed in Figure 13).

Figure 13. Example journal definitions for a switchable data group.

  Work with Journal Definitions                                    CHICAGO
  Type options, press Enter.
    1=Create  2=Change  3=Copy  4=Delete  5=Display  6=Print  7=Rename
    10=Add RJ link  12=Work with RJ links  14=Build
    17=Work with jrn attributes  24=Delete jrn environment

      ---Definition----    ------Journal------   --Management--    RJ
  Opt Name        System   Name      Library     Change   Delete   Link
      PAYABLES    CHICAGO  PAYABLES  MIMIXJRN    *SYSTEM  *YES     *SRC
      PAYABLES    NEWYORK  PAYABLES  MIMIXJRN    *SYSTEM  *YES     *SRC
      PAYABLES@R  CHICAGO  PAYABLES  MIMIXJRN@R  *NONE    *YES     *TGT
      PAYABLES@R  NEWYORK  PAYABLES  MIMIXJRN@R  *NONE    *YES     *TGT
                                                                   Bottom
  F3=Exit  F4=Prompt  F5=Refresh  F6=Create  F12=Cancel  F18=Subset
  F21=Print list  F22=Work with RJ links

Identifying the correct journal definition on the Work with Journal Definitions display can be confusing. Fortunately, the Work with RJ Links display (Figure 14) shows the association between journal definitions much more clearly.

Figure 14. Example of RJ links for a switchable data group.

  Work with RJ Links                                      System: CHICAGO
  Type options, press Enter.
    1=Add  2=Change  4=Remove  5=Display  6=Print  9=Start  10=End
    14=Build  15=Remove RJ connection  17=Work with jrn attributes
    24=Delete target jrn environment

      ---Source Jrn Def---  ---Target Jrn Def---
  Opt Name        System    Name        System    Priority  Dlvry   State
      PAYABLES    CHICAGO   PAYABLES@R  NEWYORK   *SYSDFT   *ASYNC  *INACTIVE
      PAYABLES    NEWYORK   PAYABLES@R  CHICAGO   *SYSDFT   *ASYNC  *INACTIVE
                                                                   Bottom
  Parameters or command
  ===>
  F3=Exit  F4=Prompt  F5=Refresh  F6=Add  F9=Retrieve  F11=View 2
  F12=Cancel  F13=Repeat  F16=Jrn Definitions  F18=Subset  F21=Print list

Naming convention for multimanagement environments

The IBM i remote journal function requires unique names for the local journal receiver and the remote receiver. In a MIMIX environment that uses multimanagement functions (a Vision Cluster1 access code is required for multimanagement functions), more than one system serves as the management system for MIMIX operations. In such an environment, it is possible that each node that is a management system is also both a source and target for replication activity. The following manually implemented naming convention ensures that journal receivers have unique names.

To ensure that journal receivers in a multimanagement environment have unique names, the following is strongly recommended:
• Limit the data group name to six characters. This will simplify keeping an association between the data group name and the names of associated journal definitions by allowing space for the source node identifier within those names.
• Library name-mapping - In target journal definitions, specify journal library and receiver library names that include a two-character identifier, nn, to represent the node of the associated source (local journal). Place this identifier before the remote journal indicator @R at the end of the name. Also include this identifier at the end of the target journal definition name. This convention allows for the use of the same local journal name for all data groups and places all journals and receivers from the same source in the same library.
• Manually create journal definitions (CRTJRNDFN command) using the library name-mapping convention. Journal definitions created when a data group is created may not have unique names and will not create all the necessary target journal definitions. Once the appropriately named journal definitions are created for source and target systems, manually create the remote journal links between them (ADDRJLNK command).

When implementing the naming convention, it is helpful to consider one source node at a time and create all the journal definitions necessary for replication from that source. This technique is illustrated in the example.

Example journal definitions for three management nodes

The following figures illustrate the library-mapping naming convention for journal definitions in a multimanagement environment with three nodes. In this example, all three nodes are designated as management systems. The three node environment is shown in three separate graphics. Each graphic identifies one node as a replication source, with arrows pointing to the possible target nodes, and lists the journal definitions needed to replicate from that source. In each graphic, library name-mapping is evident in the names shown for the target journal definitions and their journal and receiver libraries.

Library-mapping example: In Figure 15, the data group name is ABC. For example, when SYS01 is the source, journal definition ABC SYS01 identifies the local journal on SYS01. The source identifier 01 appears in target journal definitions ABC01@R SYS02 and ABC01@R SYS03 and in the library names defined within each. Figure 15 also includes a list of all the journal definitions associated with all nodes from this example as they would appear on the Work with Journal Definitions display.

Figure 15. Library-mapped journal definitions - three node environment. All nodes are management systems.

Figure 16 shows the RJ links needed for this example.

Figure 16. Library-mapped names as shown within the RJ links for a three node environment.

Journal receiver size for replicating large object data

For potentially large IFS stream files and files containing LOB data, it is important that your journal receiver is large enough to accommodate the data. For data groups that can be switched, the journal receivers on both the source and target systems must be large enough to accommodate the data. You may need to change your journal receiver size in order to accommodate the data. The values *MAXOPT2 and *MAXOPT3 support journal entries up to 4 GB.

Verifying journal receiver size options

To display the current journal receiver size options for journals used by MIMIX, do the following from the system where the source journal definition is located:
1. From a command line, enter the command installation-library/WRKJRNDFN
2. Next to the journal definition for the system you are on, type a 17 (Work with journal attributes).
3. View the Receiver size options field to see how the journal is configured. The value should indicate support for large journal entries, such as *MAXOPT2 or *MAXOPT3.

Changing journal receiver size options

To change the journal receiver size, do the following:
1. From a command line, type CHGJRN (Change Journal) and press F4 to prompt.
2. At the Journal prompt, enter the journal and library names for the journal you wish to change.
3. At the Receiver size option prompt, specify a value that indicates support for large journal entries.

Note: Make sure the other systems in your environment are compatible in size.
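The change described above can also be made directly with the IBM i CHGJRN command. A minimal sketch, using the PAYABLES journal and MIMIXJRN library from the earlier example as illustrative names:

```
/* Sketch only: the journal and library names are illustrative.          */
/* JRNRCV(*GEN) attaches a newly generated receiver; the new receiver    */
/* size option takes effect with the new receiver.                       */
CHGJRN JRN(MIMIXJRN/PAYABLES) JRNRCV(*GEN) RCVSIZOPT(*MAXOPT3)
```

Remember that, as noted above, the corresponding journal on the other system must also be changed so the systems remain compatible in size.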
Creating a journal definition

Do the following to create a journal definition:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select option 3 (Work with journal definitions) and press Enter. The Work with Journal Definitions display appears.
3. Type 1 (Create) next to the blank line at the top of the list area and press Enter. The Create Journal Definition display appears.
4. At the Journal definition prompts, specify a two-part name.
   Note: Journal definition names cannot be UPSMON or begin with the characters MM.
5. Verify that the following prompts contain the values that you want. If you need to identify an existing journaling environment to MIMIX, specify the information you need. If you have not journaled before, the default values are appropriate.
      Journal
        Library
      Journal library ASP
      Journal receiver prefix
        Library
      Journal receiver library ASP
   Important! The IBM feature for High Availability Journal Performance is required for journal standby state in Step 6 and journal caching in Step 7.
6. At the Target journal state prompt, specify the requested status of the target journal. The default value is *ACTIVE.
7. At the Journal caching prompt, specify whether the system should cache journal entries in main storage before writing them to disk. This value can be used with active journaling support or journal standby state. For more information see “Configuring for high availability journal performance enhancements” on page 309.
8. Press Enter. One or more additional prompts related to receiver change management appear on the display.
9. Set the values you need to manage changing journal receivers, as follows:
   a. At the Receiver change management prompt, specify the value you want. The recommended default value is *BOTH. For more information about valid combinations of values, press F1 (Help).
   b. Press Enter.
   c. Verify that the following prompts contain the values that you want:
         Receiver threshold size (MB)
         Time of day to change receiver
         Reset large sequence threshold
   d. If necessary, change the values.
10. Press Enter. One or more additional prompts related to receiver delete management appear on the display. Set the values you need to manage deleting journal receivers, as follows:
   a. It is recommended that you accept the default value *YES for the Receiver delete management prompt to allow MIMIX to perform delete management.
   b. Verify that the following prompts contain the values that you want:
         Keep unsaved journal receivers
         Keep journal receiver count
         Keep journal receivers (days)
   c. If necessary, change the values.
11. At the Description prompt, type a brief text description of the journal definition.
12. This step is optional. If you want to access additional parameters that are considered advanced functions, press F10 (Additional parameters). Make any changes you need to the additional prompts that appear on the display.
13. To create the journal definition, press Enter.

Changing a journal definition

Note: Changes to the Receiver threshold size (MB) (THRESHOLD) are effective with the next receiver change. Before a change to any other parameter is effective, you must rebuild the journal environment. The Build Journal Environment (BLDJRNENV) command is used to build the journal environment objects for a journal definition. Rebuilding the journal environment ensures that it matches the journal definition and prevents problems starting the data group. If the journal receiver prefix in the specified library is already used, you must change it to an unused value.

To change a journal definition, do the following:
1. Access the Work with Journal Definitions display according to your configuration needs:
   • In a standard MIMIX environment, from the MIMIX Configuration Menu select option 3 (Work with journal definitions) and press Enter.
   • In a clustering environment, from the MIMIX Cluster Menu select option 20 (Work with system definitions) and press Enter. When the Work with System Definitions display appears, type 12 (Journal Definitions) next to the system name you want and press Enter.
2. Type 2 (Change) next to the definition you want and press Enter. The Change Journal Definition (CHGJRNDFN) display appears.
3. Press Enter twice to see all prompts for the display.
4. Make any changes you need to the prompts. Press F1 (Help) for more information about the values for each parameter.
5. If you need to access advanced functions, press F10 (Additional parameters). When the additional parameters appear on the display, make the changes you need.
6. To accept the changes, press Enter.
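The create and change procedures can also be sketched from the command line. A sketch only: the CRTJRNDFN, CHGJRNDFN command names and the THRESHOLD keyword (Receiver threshold size, in MB) are documented above, but the JRNDFN and TEXT keywords, the two-part name shown, and the threshold value are illustrative assumptions.

```
/* Sketch only: keyword names other than THRESHOLD are assumed, and the  */
/* definition name and values are illustrative.                          */
CRTJRNDFN JRNDFN(MYJRN CHICAGO) TEXT('Journal definition for CHICAGO')

/* A THRESHOLD change is effective with the next receiver change;        */
/* changes to other parameters require rebuilding the journal            */
/* environment (option 14, which calls BLDJRNENV).                       */
CHGJRNDFN JRNDFN(MYJRN CHICAGO) THRESHOLD(1000)
```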
Building the journaling environment

Before replication for a data group can occur, the journal environment for all journal definitions used by that data group must be created on each system. If the data group definition specifies to journal on the target system, the journal environment must be built on each system that will be a target system for replication of that data group. For switchable data groups not specified to journal on the target system, it is recommended to build the source journaling environments for both directions of replication so the environments exist for data group replication after switching. If you do not build either source or target journal environments, the first time the data group starts MIMIX will automatically build the journal environments for you.

A journaling environment includes the following objects on the system specified in the journal definition: the journal, the journal receiver, their libraries, and the threshold message queue and its library.

When the BLDJRNENV command is run, if the objects do not exist, they are created based on what is specified in the journal definition. If the journal exists, the journal receiver prefix and library, and threshold parameters are updated from the source specified in the JRNVAL parameter. The Source for values (JRNVAL) parameter of the BLDJRNENV command is used to determine the source for the values of these objects:
• Specifying *JRNDFN for the JRNVAL parameter changes the values of the journal environment objects to match the values of the objects in the journal definition.
• Specifying *JRNENV for the JRNVAL parameter changes the values of the objects in the journal definition to match the values in the existing journal environment objects.
In a remote journal environment, the values specified in the journal definition (*JRNDFN) are only applicable to the source journal.

Note: When building a journal environment, ensure the journal receiver prefix in the specified library is not already used.

All previous steps in your configuration checklist must be complete before you use this procedure. To build the journaling environment, do the following:
Note: If you are journaling on the target system, perform this procedure for both the source and target systems.
1. From the MIMIX Main Menu, select 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select one of the following and press Enter:
   • Select 8 (Work with remote journal links) to build the journaling environments for remote journaling.
   • Select 3 (Work with journal definitions) to build all other journaling environments.
3. From the Work with display, type 14 (Build) next to the journal definition you want to build and press Enter. Option 14 calls the Build Journal Environment (BLDJRNENV) command. For environments using remote journaling, the command is called twice (first for the source journal definition and then for the target journal definition).
4. A status message is issued indicating that the journal environment was created for each system.
5. To verify that the source journals have been created for a data group, do the following:
   a. Enter the command WRKDGDFN
   b. From the Work with DG Definitions display, type 12 (Journal definitions) next to the data group and press Enter. The Work with Journal Definitions display is subsetted to the journal definitions for the data group.
   c. Type 17 (Work with jrn attributes) next to the definition that is the source for the local system.

Note: If you plan to journal access paths, you need to change the value of the receiver size options. Doing so prevents sequence numbers from being reset unexpectedly. To do this, do the following from each system in the data group:
   a. Type the command CHGJRN and press F4 (Prompt).
   b. For the JRN parameter, specify the name of the journal from the journal definition.
   c. Specify *GEN for the JRNRCV parameter.
   d. Specify *NONE for the RCVSIZOPT parameter.
   e. Press Enter.

Changing the journaling environment to use *MAXOPT3

This procedure changes journal definitions and builds the journaling environments necessary in order to use a journal with a receiver size option of *MAXOPT3. Before you use this procedure, consider the following:
• Determine which journal definitions must be changed. Table 28 identifies requirements according to the data group configuration. Switchable data groups require that journal definitions be changed for both source and target journals.
• A journal definition that is changed to use *MAXOPT3 support affects all data groups which use the journal definition.
• When the value *MAXOPT3 is used, the journal receivers cannot be saved and restored to systems with operating system releases earlier than V5R3M0.
• When a journal definition for the system journal (QAUDJRN) is changed to use *MAXOPT3 support, any additional MIMIX installations on the same system must also use *MAXOPT3 support for the system journal. The additional MIMIX installations must be running version 6 software and must have their journal definitions for the system journal changed to use *MAXOPT3 support.
• The default value for the journal sequence reset threshold changes when using *MAXOPT3. If your sequence numbers will exceed 10 digits, updates must be made to use the MIMIX command and outfile fields that support sequence numbers with more than 10 digits. Updates should be made to any automation that uses journal sequence numbers with MIMIX and any journal receiver management exit programs or monitors with an event class (EVTCLS) of *JRN.

Table 28. Journal definitions to change when converting to *MAXOPT3.

  Data Group Configuration     Switchable  Journal Definitions to Change
  User journal with remote     Yes         Journal definition for normal source system (local)
  journaling                               Journal definition for normal target system (remote, @R)
                                           Journal definition for switched source system (local)
                                           Journal definition for switched target system (remote, @R)
                               No          Journal definition for source system (local)
                                           Journal definition for target system (remote, @R)
  User journal with MIMIX      Yes         Journal definition for source system
  source-send processing                   Journal definition for target system
                               No          Journal definition for source system
  System journal (QAUDJRN)     Yes         QAUDJRN journal definition for source system
                                           QAUDJRN journal definition for target system
                               No          QAUDJRN journal definition for source system

Do the following:
1. For data groups which use the journal definitions that will be changed, end replication in a controlled manner using topic “Ending a data group in a controlled manner” in the Using MIMIX book. When ending, specify *ALL for the Process prompt and *CNTRLD for the End process prompt. Procedures within this topic will direct how to:
   • Prepare for a controlled end of a data group. This includes how to check for and resolve any open commits.
   • Perform the controlled end.
   • Confirm the end request completed without problems.
   Note: Resolve any open commits before continuing. If commitment control is used, ensure that there are no open commit cycles.
2. From the management system, select option 11 (Configuration menu) on the MIMIX Main Menu. Then select option 3 (Work with journal definitions) to access the Work with Journal Definitions display.
3. From the Work with Journal Definitions display, do the following to a journal definition:
   a. Type option 2 (Change) next to a journal definition and press Enter.
   b. Press F10 (Additional parameters).
   c. At the Receiver size option prompt, specify *MAXOPT3.
   d. Optionally, specify a value for the Reset large sequence threshold prompt. For *MAXOPT3, the value should be between 9901 and 18446640000000. If no new value is specified, MIMIX will automatically use the default value associated with the value you specify for the receiver size option in Step 3c.
   e. Press Enter.
4. Repeat Step 3 for each of the journal definitions you need to change, as indicated in Table 28.
5. Verify that the changed journal definitions have appropriate values. From the Work with Journal Definitions display, type a 5 (Display) next to each changed journal definition and press Enter. Verify that *MAXOPT3 is specified for the Receiver size option and that the Reset large sequence threshold prompt contains the value you specified in Step 3d. If you did not specify a value, continue with the next step.
6. After all the necessary journal definitions are changed, type a 14 (Build) next to the journal definitions you changed and press Enter.
   Note: For remote journaling environments, only perform this step for a source journal definition. Building the environment for the source journal will automatically result in the building of the environment for the associated target journal definition.
7. Verify that the journals have been changed and now have appropriate values. From the appropriate system (source or target), do the following:
   • From the source system, access the Work with Journal Definitions display. Then type 17 (Work with jrn attributes) next to a changed source journal definition and press Enter.
   • From the target system, type 17 (Work with jrn attributes) next to a changed target journal definition and press Enter.
   Verify that *MAXOPT3 is specified as one of the values for the Receiver size options field.
8. Update any automation programs. Any programs that include journal sequence numbers must be changed to use the Reset large sequence threshold (RESETTHLD2) and the Receiver size option (RCVSIZOPT) parameters.
9. Start the data groups using default values. Refer to topic “Starting selected data group processes” in the Using MIMIX book.
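Steps 3 and 6 of the conversion can be sketched from the command line. A sketch only: CHGJRNDFN, BLDJRNENV, RCVSIZOPT, and RESETTHLD2 are named in the text, but the JRNDFN keyword and the definition name shown are illustrative assumptions.

```
/* Change a journal definition to *MAXOPT3 support. The RESETTHLD2 value */
/* is illustrative; the documented range for *MAXOPT3 is 9901 through    */
/* 18446640000000.                                                       */
CHGJRNDFN JRNDFN(PAYABLES CHICAGO) RCVSIZOPT(*MAXOPT3) RESETTHLD2(9901)

/* Rebuild the journaling environment so the change takes effect. In a   */
/* remote journaling environment, building the source environment also   */
/* builds the associated target (@R) environment.                        */
BLDJRNENV JRNDFN(PAYABLES CHICAGO)
```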
Changing the remote journal environment

Use the following checklist to guide you through the process of changing an existing remote journal configuration. For example, this procedure is appropriate for changing a journal receiver library for the target journal in a remote journaling (RJ) environment or for any other changes that affect the target journal. These steps can be used for synchronous or asynchronous remote journals.

Important! Changing the RJ environment must be done in the correct sequence. Failure to follow the proper sequence can introduce errors in replication and journal management.

Perform these tasks from the MIMIX management system unless these instructions indicate otherwise.

1. Use topic “Ending a data group in a controlled manner” in the Using MIMIX book to prepare for and perform a controlled end of the data group and end the RJ link. Specify the following on the ENDDG command:
   • *ALL for the Process prompt
   • *CNTRLD for the End process prompt
   • *YES for the End remote journaling prompt
2. Verify that no other data groups use the RJ link using topic “Identifying data groups that use an RJ link” on page 283.
3. Verify that the remote journal link is not in use on both systems. Use topic “Displaying status of a remote journal link” in the Using MIMIX book. The remote journal link should have a state value of *INACTIVE before you continue.
4. Access the journal definitions for the data group whose environment you want to change. From the Work with Data Groups display, type a 45 (Journal definitions) next to the data group that you want and press Enter.
5. Remove the connection to the remote journal as follows:
   a. Type a 12 (Work with RJ links) next to either journal definition you want and press Enter. You can select either the source or target journal definition.
   b. From the Work with RJ Links display, type a 15 (Remove RJ connection) next to the link with the target journal definition you want and press Enter. Choose the link based on the name in the Target Jrn Def column.
      Note: The target journal definition will end with @R.
   c. A confirmation display appears. To continue removing the connections for the selected links, press Enter.
6. From the Work with RJ Links display, do the following to delete the target system objects associated with the RJ link:
   a. Type a 24 (Delete target jrn environment) next to the link that you want and press Enter.
      Note: The target journal definition will end with @R.
   b. A confirmation display appears. To continue deleting the journal, its associated message queue, and the journal receiver, press Enter.
7. Make the changes you need for the target journal, as follows:
   a. Type option 2 (Change) next to the journal definition for the target system you want and press Enter.
      Note: The target journal definition will end with @R.
   b. Make the changes you need. For example, you can change the target (remote) journal definition to a new receiver library.
   c. Press F12 to return to the Work with Journal Definitions display.
   d. From the Work with Journal Definitions display, type a 14 (Build) next to the target journal definition and press Enter.
8. Identify the journal receiver at which replication should resume, as follows:
   a. Return to the Work with Data Groups display. Type an 8 (Display status) next to the data group you want and press Enter.
   b. Locate the name of the receiver in the Last Read field for the Database process.
9. Do the following to start the RJ link:
   a. From the Work with Data Groups display, type a 44 (RJ links) next to the data group you want and press Enter. Locate the link you want based on the name in the Target Jrn Def column.
   b. Type a 9 (Start) next to the link with the target journal definition and press F4 (Prompt). The Start Remote Journal Link (STRRJLNK) display appears.
   c. Specify the receiver name from Step 8b as the value for the Starting journal receiver (STRRCV) and press Enter.
10. Start the data group using default values. Refer to topic “Starting selected data group processes” in the Using MIMIX book.
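The restart in Step 9 can be sketched as a command. A sketch only: STRRJLNK and the STRRCV parameter are named in the text, but the receiver name shown is illustrative, and the keywords that identify the link are omitted because the text does not give them.

```
/* Sketch only: restart the RJ link, naming the receiver found in the    */
/* Last Read field as the starting point. The receiver name is           */
/* illustrative; the parameters identifying the link are omitted.        */
STRRJLNK STRRCV(PAYABLES0047)
```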
Adding a remote journal link

This procedure requires that a source journal definition exists. The process of creating an RJ link will create the target journal definition with appropriate values for remote journaling. Before you create the RJ link you should be familiar with the “Journal definition considerations” on page 184.

To create a link between journal definitions, do the following:
1. From the MIMIX Configuration menu, select option 3 (Work with journal definitions) and press Enter. The Work with Journal Definitions display appears.
2. Type a 10 (Add RJ link) next to the journal definition you want and press Enter. The Add Remote Journal Link (ADDRJLNK) display appears.
3. The journal definition you selected in the previous step appears in the prompts for the Source journal definition. Verify that this is the definition you want as the source for RJ processing.
4. At the Target journal definition prompts, specify *GEN as the Name and specify the value you want for System.
   Note: If you specify the name of a journal definition, the definition must exist and you are responsible for ensuring that its values comply with the recommended values. Refer to the related topic on considerations for creating journal definitions for remote journaling for more information.
5. Verify and change the values for Journal library ASP, Journal library ASP device, Journal receiver library ASP, and Journal receiver lib ASP dev as needed. If you are using an independent ASP in this configuration you also need to identify the auxiliary storage pools (ASPs) from which the journal and journal receiver used by the remote journal are allocated.
6. Verify that the values for the following prompts are what you want. If necessary, change the values:
   • Delivery
   • Sending task priority
   • Primary transfer definition
   • Secondary transfer definition
7. At the Description prompt, type a text description of the link, enclosed in apostrophes.
8. To create the link between journal definitions, press Enter.

Changing a remote journal link

Changes to the delivery and sending task priority take effect only after the remote journal link has been ended and restarted. The Using MIMIX book describes how to end only the RJ link.

Note: If you plan to change the primary transfer definition or secondary transfer definition to a definition that uses a different RDB directory entry, you also need to remove the existing connection between objects. Use topic “Removing a remote journaling environment” on page 207 before changing the remote journal link.

To change characteristics of the link between source and target journal definitions, do the following:
1. Before you change a remote journal link, end activity for the link.
2. From the Work with RJ Links display, type a 2 (Change) next to the entry you want and press Enter. The Change Remote Journal Link (CHGRJLNK) display appears.
3. Specify the values you want for the following prompts:
   • Delivery
   • Sending task priority
   • Primary transfer definition
   • Secondary transfer definition
   • Description
4. When you are ready to accept the changes, press Enter.
5. To make the changes effective, do the following:
   a. If you removed the RJ connection in Step 1, you need to use topic “Building the journaling environment” on page 195.
   b. Start the data group which uses the RJ link.

Temporarily changing from RJ to MIMIX processing

This procedure is appropriate for when you plan to continue using remote journaling as your primary means of transporting data to the target system but, for some reason, temporarily need to revert to MIMIX send processing.

Important! If the data group is configured for MIMIX Dynamic Apply, you must complete the procedure in “Checklist: Converting to legacy cooperative processing” on page 138 before you remove remote journaling.

Do the following:
1. Perform a controlled end for the data group that you want to change using topic “Ending a data group in a controlled manner” in the Using MIMIX book. On the ENDDG command, specify the following:
   • *ALL for the Process prompt
   • *CNTRLD for the End process prompt
   Note: Do not end the RJ link at this time.
2. Verify that the process is ended.
3. Modify the data group definition as follows:
   a. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter. The Change Data Group Definition (CHGDGDFN) display appears.
   b. Press Enter to see additional prompts.
   c. Specify *NO for the Use remote journal link prompt.
   d. To accept the change, press Enter.
4. Start the data group using the procedure “Starting selected data group processes” in the Using MIMIX book, specifying *ALL for the Start Process prompt. On the Work with Data Groups display, the data group should change to show a red “L” in the Source DB column.

Changing from remote journaling to MIMIX processing

Use this procedure when you no longer want to use remote journaling for a data group and want to permanently change the data group to use MIMIX send processing. Perform these tasks from the MIMIX management system unless these instructions indicate otherwise.

Important! If the data group is configured for MIMIX Dynamic Apply, you must complete the procedure in “Checklist: Converting to legacy cooperative processing” on page 138 before you remove remote journaling.

Do the following:
1. Use the procedure “Ending a data group in a controlled manner” in the Using MIMIX book to prepare for and perform a controlled end of the data group and end the RJ link. Specify the following on the ENDDG command:
   • *ALL for the Process prompt
   • *CNTRLD for the End process prompt
   • *YES for the End remote journaling prompt
2. Modify the data group definition as follows:
   a. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter. The Change Data Group Definition (CHGDGDFN) display appears.
   b. Press Enter to see additional prompts.
   c. Specify *NO for the Use remote journal link prompt.
   d. To accept the change, press Enter.
3. Perform the procedure in topic “Removing a remote journaling environment” on page 207. Step 2 of that procedure verifies that the RJ link is not in use by any other processes or data groups before ending and removing the RJ environment.
4. Start data group replication using the procedure “Starting selected data group processes” in the Using MIMIX book and specify *ALL for the Start processes prompt (PRC parameter).
do the following to delete the target system objects associated with the RJ link: a. 3. Type a 24 (Delete target jrn environment) next to the link that you want and press Enter. To continue removing the connections for the selected links. perform a controlled end of the data group. From the management system. type a 45 (Journal definitions) next to the data group that you want and press Enter. do not continue with this procedure. From the Work with RJ Links display. check with your MIMIX administrator and determine how to proceed. Access the journal definitions for the data group whose environment you want to change. From the Work with RJ Links display. Use “Identifying data groups that use an RJ link” on page 283. Type a 12 (Work with RJ links) next to either journal definition you want and press Enter. c. Verify that the remote journal link is not used by any data group. Attention: Do not continue with this procedure if you identified a data group that uses the remote journal link and the data group must continue to be operational. 4. If you identify a data group that uses the remote journal link. its link to the source journal definition is removed. its associated message queue. Delete the target journal definition using topic “Deleting a Definition” in the Using MIMIX book. To continue deleting the journal. 6. the journal receiver.b. Use option 4 (Delete) on the Work with Monitors display to delete the RJLNK monitors which have the same name as the RJ link. press Enter. A confirmation display appears. When you delete the target journal definition. and to remove the connection to the source journal receiver. 208 . 5. CHAPTER 10 Configuring data group definitions By creating a data group definition. Once data group definitions exist for MIMIX. “Changing a data group definition” on page 225 provides the steps to follow for changing a data group definition. you identify to MIMIX the characteristics of how replication occurs between two systems. 
Tips for data group parameters

This topic provides tips for using the more common options for data group definitions. Shipped default values for the Create Data Group Definition (CRTDGDFN) command result in data groups configured for MIMIX Dynamic Apply. For additional information see Table 12 in “Considerations for LF and PF files” on page 96. Context-sensitive help is available online for all options on the data group definition commands.

Data group names (DGDFN, DGSHORTNAM) These parameters identify the data group. The Data group definition (DGDFN) is a three-part name that uniquely identifies a data group. The first part of the name identifies the data group. The second and third parts of the name (System 1 and System 2) specify system definitions representing the systems between which the files and objects associated with the data group are replicated. The three-part name must be unique to a MIMIX cluster.

Notes:
• In the first part of the name, the first character must be either A - Z, $, #, or @. The remaining characters can be alphanumeric and can contain a $, #, @, a period (.), or an underscore (_). Data group names cannot be UPSMON or begin with the characters MM.
• One of the system definitions specified must represent a management system.
• Although you can specify the system definitions in any order, you may find it helpful if you specify them in the order in which replication occurs during normal operations. For example, if you normally replicate data for an application from a production system (MEXICITY) to a backup system (CHICAGO) and the backup system is the management system for the MIMIX cluster, you might name your data group SUPERAPP MEXICITY CHICAGO.
• For Clustering environments only, MIMIX recommends using the value *RCYDMN in System 1 and System 2 fields for Peer CRGs.
• MIMIX uses the name PRIMARY for a value of the primary transfer definition (PRITFRDFN) parameter and for the first part of the name of a transfer definition.

The Short data group name (DGSHORTNAM) parameter indicates an abbreviated name used as a prefix to identify jobs associated with a data group. MIMIX will generate this prefix for you when the default *GEN is used. The short name must be unique to the MIMIX cluster and cannot be changed after the data group is created.

Data source (DTASRC) This parameter indicates which of the systems in the data group definition is used as the source of data for replication. For many users normal replication occurs from a production system to a backup system, where the backup system is defined as the management system for MIMIX.

Allow to be switched (ALWSWT) This parameter determines whether the direction in which data is replicated between systems can be switched. If you plan to use the data group for high availability purposes, use the default value *YES. This allows you to use one data group for replicating data in either direction between the two systems. If you do not allow switching directions, you need to have a second data group with similar attributes in which the roles of source and target are reversed in order to support high availability. Refer to “Additional considerations for data groups” on page 220 for more information.
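The naming and switching parameters above come together on the CRTDGDFN command. The following sketch uses the example names from this topic (SUPERAPP, MEXICITY, CHICAGO); the DTASRC value shown is an assumption for illustration and is not prescribed by this manual:

```cl
/* Hypothetical example: create a switchable data group named      */
/* SUPERAPP between system definitions MEXICITY (production)       */
/* and CHICAGO (backup/management system).                         */
CRTDGDFN   DGDFN(SUPERAPP MEXICITY CHICAGO) +
           DGSHORTNAM(*GEN)    /* MIMIX generates the job prefix   */ +
           DTASRC(*SYS1)       /* assumed value: system 1 is source*/ +
           ALWSWT(*YES)        /* allow switching directions       */
```

Because ALWSWT(*YES) is the shipped default, a single data group like this can replicate in either direction between the two systems after a switch.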
Data group type (TYPE) The default value *ALL indicates that the data group can be used by both user journal and system journal replication processes. Data group types of *ALL or *DB include database files, data areas, and data queues. Data group types of *ALL may also include tracking entries. The value *ALL is required for user journal replication of IFS objects. MIMIX Dynamic Apply also supports the value *DB. For more information, see “Requirements and limitations of MIMIX Dynamic Apply” on page 101.
Note: In Clustering environments only, the data group value of *PEER is available.

Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the transfer definitions used to communicate between the systems defined by the data group. The name you specify in these parameters must match the first part of a transfer definition name. If you specify a secondary transfer definition (SECTFRDFN), it is used if the communications path specified in the primary transfer definition is not available. Once MIMIX starts using the secondary transfer definition, it continues to use it even after the primary communication path becomes available again.

Reader wait time (seconds) (RDRWAIT) You can specify the maximum number of seconds that the send process waits when there are no entries available to process. Jobs go into a delay state when there are no entries to process. Jobs wait for the time you specify even when new entries arrive in the journal. A value of 0 uses more system resources.

Common database parameters (JRNTGT, JRNDFN1, JRNDFN2, ASPGRP1, ASPGRP2, RJLNK, COOPJRN, NBRDBAPY, DBJRNPRC) These parameters apply to data groups that can include database files or tracking entries.

Journal on target (JRNTGT) The default value *YES enables journaling on the target system, which allows you to switch the direction of a data group more quickly. MIMIX Dynamic Apply requires the value *YES. Replication of files with some types of referential constraint actions may require a value of *YES. If you specify *NO, you must ensure that, in the event of a switch to the direction of replication, you manually start journaling on the target system before allowing users to access the files. Otherwise, activity against those files may not be properly recorded for replication.

System 1 journal definition (JRNDFN1) and System 2 journal definition (JRNDFN2) parameters identify the user journal definitions associated with the systems defined as System 1 and System 2, respectively, of the data group. The value *DGDFN indicates that the journal definition has the same name as the data group definition. You may need to build the journaling environment for these journal definitions.
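As a sketch of how the type, target journaling, journal definition, and transfer definition parameters above might be specified together, consider the following; the data group name is the example from this topic, and the combination of values is an illustrative assumption rather than a configuration this manual prescribes:

```cl
/* Hypothetical example: a database-only data group that journals  */
/* on the target and names its journal definitions after the data  */
/* group itself (*DGDFN).                                          */
CRTDGDFN   DGDFN(SUPERAPP MEXICITY CHICAGO) +
           TYPE(*DB)            /* database replication only       */ +
           JRNTGT(*YES)         /* journal on the target system    */ +
           JRNDFN1(*DGDFN)      /* journal definition for system 1 */ +
           JRNDFN2(*DGDFN)      /* journal definition for system 2 */ +
           PRITFRDFN(PRIMARY)   /* primary transfer definition     */
```

Specifying a secondary transfer definition (SECTFRDFN) in addition would give MIMIX a fallback communications path if the primary path becomes unavailable.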
System 1 ASP group (ASPGRP1) and System 2 ASP group (ASPGRP2) parameters identify the name of the primary auxiliary storage pool (ASP) device within an ASP group on each system. The value *NONE allows replication from libraries in the system ASP and basic user ASPs 2-32. Specify a value when you want to replicate IFS objects from a user journal or when you want to replicate objects from ASPs 33 or higher. For more information see “Benefits of independent ASPs” on page 519.

Use remote journal link (RJLNK) This parameter identifies how journal entries are moved to the target system. The default value, *YES, uses remote journaling to transfer data to the target system. This value results in the automatic creation of the journal definitions (CRTJRNDFN command) and the RJ link (ADDRJLNK command). The RJ link defines the source and target journal definitions and the connection between them. When ADDRJLNK is run during the creation of a data group, the data group transfer definition names are used for the ADDRJLNK transfer definition parameters. The value *NO is appropriate when MIMIX source-send processes must be used.

The DTASRC, ALWSWT, JRNTGT, JRNDFN1, and JRNDFN2 parameters interact to automatically create as much of the journaling environment as possible. The DTASRC parameter determines whether system 1 or system 2 is the source system for the data group. When you create the data group definition, if the journal definition for the source system does not exist, a journal definition is created. If you specify to journal on the target system and the journal definition for the target system does not exist, that journal definition is also created. The names of journal definitions created in this way are taken from the values of the JRNDFN1 and JRNDFN2 parameters according to which system is considered the source system at the time they are created.

Cooperative journal (COOPJRN) This parameter determines whether cooperatively processed operations for journaled objects are performed primarily by user (database) journal replication processes or system (audit) journal replication processes. Cooperative processing through the user journal is recommended and is called MIMIX Dynamic Apply. This provides you with support for system values and other system attributes that MIMIX currently does not support. For data groups created on version 5, the shipped default value *DFT resolves to *USRJRN (user journal) when configuration requirements for MIMIX Dynamic Apply are met. If those requirements are not met, *DFT resolves to *SYSJRN and cooperative processing is performed through system journal replication processes.

Number of DB apply sessions (NBRDBAPY) You can specify the number of apply sessions allowed to process the data for the data group.

DB journal entry processing (DBJRNPRC) This parameter allows you to specify several criteria that MIMIX will use to filter user journal entries before they reach the database apply (DBAPY) process. The following available elements describe how journal entries are handled by the database reader (DBRDR) or the database send (DBSND) processes. Each element of the parameter identifies a criteria that can be set to either *SEND or *IGNORE. The value *SEND causes the journal entries meeting the criteria to be processed and sent to the database apply process. The value *IGNORE prevents the entries from being sent to the database apply process. For data groups configured to use MIMIX source-send processes, filtering out entries can minimize the amount of data that is sent over a communications path. Certain database techniques, such as keyed replication, may require that an element be set to a specific value.
• Before images This criteria determines whether before-image journal entries are filtered out before reaching the database apply process. If you use keyed replication, the before-images are often required and you should specify *SEND. *SEND is also required for the IBM RMVJRNCHG (Remove Journal Change) command. For MIMIX to use this feature, the journal image file entry option (FEOPT parameter) must allow before-image journaling (*BOTH).
• For files not in data group This criteria determines whether journal entries for files not defined to the data group are filtered out.
• Not used by MIMIX This criteria determines whether journal entries not used by MIMIX are filtered out.
• Generated by MIMIX activity This criteria determines whether journal entries resulting from the MIMIX database apply process are filtered out.

Remote journaling threshold (RJLNKTHLD) This parameter specifies the backlog threshold criteria for the remote journal function. The threshold can be specified as a time difference, a number of journal entries, or both. When a time difference is specified, the value is the amount of time, in minutes, between the timestamp of the last source journal entry and the timestamp of the last remote journal entry. When a number of journal entries is specified, the value is the number of journal entries that have not been sent from the local journal to the remote journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold. When the backlog reaches any of the specified criteria, the threshold exceeded condition is indicated in the status of the RJ link.

Synchronization check interval (SYNCCHKITV) This parameter, which is only valid for database processing, allows you to specify how many before-image entries to process between synchronization checks. When the value specified is reached, a synchronization check entry is sent to the apply process on the target system. The apply process compares the before-image to the image in the file (the entire record, byte for byte). If there is a synchronization problem, MIMIX puts the data group file entry on hold and stops applying journal entries. The synchronization check transactions still occur even if you specify to ignore before-images in the DB journal entry processing (DBJRNPRC) parameter.

Time stamp interval (TSPITV) This parameter, which is only valid for database processing, allows you to specify the number of entries to process before MIMIX creates a time stamp entry. Time stamps are used to evaluate performance.
Note: The TSPITV parameter does not apply for remote journaling (RJ) data groups.

Additional parameters: Use F10 (Additional parameters) to access the following parameters. These parameters are considered advanced configuration topics.
Verify interval (VFYITV) This parameter allows you to specify the number of journal transactions (entries) to process before MIMIX performs additional processing. When the value specified is reached, MIMIX verifies that the communications path between the source system and the target system is still active and that the send and receive processes are successfully processing transactions. This value also affects how often the status is updated with the "Last read" entries. A lower value results in more accurate status information and provides more timely reaction to error conditions; a higher value uses less system resources. Larger, high-volume systems should have higher values.

Journal at creation (JRNATCRT) This parameter specifies whether to start journaling on new objects of type *FILE, *DTAARA, and *DTAQ when they are created. The default for this parameter is *DFT, which allows MIMIX to determine the objects to journal at creation. The decision to start journaling for a new object is based on whether the data group is configured to cooperatively process any object of that type in a library. All new objects of the same type are journaled, including those not replicated by the data group. If multiple data groups include the same library in their configurations, only allow one data group to use journal at object creation (*YES or *DFT). For additional information, see “Processing of newly created files and objects” on page 114.
Note: There are some IBM library restrictions identified within the requirements for implicit starting of journaling described in “What objects need to be journaled” on page 294.

Data area polling interval (DTAARAITV) This parameter specifies the number of seconds that the data area poller waits between checks for changes to data areas. The poller process is only used when configured data group data area entries exist. The preferred methods of replicating data areas require that data group object entries be used to identify data areas. When object entries identify data areas, the value specified in them for cooperative processing (COOPDB) determines whether the data areas are processed through the user journal with advanced journaling, or through the system journal.

Parameters for automatic retry processing: MIMIX may use delay retry cycles when performing system journal replication to automatically retry processing an object that failed due to a locking condition or an in-use condition. It is normal for some pending activity entries to undergo delay retry processing—for example, when a conflict occurs between replicated objects in MIMIX and another job on the system. The following parameters define the scope of two retry cycles:

Number of times to retry (RTYNBR) This parameter specifies the number of attempts to make during a delay retry cycle.

First retry delay interval (RTYDLYITV1) This parameter specifies the amount of time, in seconds, to wait before retrying a process in the first (short) delay retry cycle.

Second retry delay interval (RTYDLYITV2) This parameter specifies the amount of time, in seconds, to wait before retrying a process in the second (long) delay retry cycle. This is only used after all the retries for the RTYDLYITV1 parameter have been attempted.
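The retry parameters above can be sketched on the CHGDGDFN command as follows; the data group name is the example used earlier in this chapter, and the numeric values are illustrative assumptions, not recommendations from this manual:

```cl
/* Hypothetical example: three attempts per cycle, with short      */
/* 30-second waits in the first cycle and longer 300-second waits  */
/* in the second cycle.                                            */
CHGDGDFN   DGDFN(SUPERAPP MEXICITY CHICAGO) +
           RTYNBR(3)          /* attempts per delay retry cycle    */ +
           RTYDLYITV1(30)     /* first (short) cycle wait, seconds */ +
           RTYDLYITV2(300)    /* second (long) cycle wait, seconds */
```

With values like these, an object that stays locked would be retried three times at 30-second intervals, then three more times at 300-second intervals, before the longer-term handling described below takes over.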
After the initial failed save attempt, MIMIX delays for the number of seconds specified for the First retry delay interval (RTYDLYITV1) before retrying the save operation. This is repeated for the specified number of times (RTYNBR). If the object cannot be saved after all attempts in the first cycle, MIMIX enters the second retry cycle. In the second retry cycle, MIMIX uses the number of seconds specified in the Second retry delay interval (RTYDLYITV2) parameter and repeats the save attempt for the specified number of times (RTYNBR). If the object identified by the entry is in use (*INUSE) after the first and second retry cycle attempts have been exhausted, a third retry cycle is attempted if the Automatic object recovery policy is enabled. The values in effect for the Number of third delay/retries policy and the Third retry interval (min.) policy determine the scope of the third retry cycle. After all attempts have been performed, if the object still cannot be processed because of contention with other jobs, the status of the entry will be changed to *FAILED.

Adaptive cache (ADPCHE) This parameter enables adaptive caching for a data group. Adaptive caching is a technique by which MIMIX caches data into memory before it is needed by user journal replication processes. Using adaptive caching provides greater elapsed time performance by using additional memory. See “Additional considerations for data groups” on page 220 for more information.

File and tracking entry options (FEOPT) This parameter specifies default options that determine how MIMIX handles file entries and tracking entries for the data group. All database file entries, object tracking entries, and IFS tracking entries defined to the data group use these options unless they are explicitly overridden by values specified in data group file or object entries. File entry options in data group object entries enable you to set values for files and tracking entries that are cooperatively processed, such as IFS stream files, data areas, or data queues. The options are as follows:
• Journal image This option allows you to control the kinds of record images that are written to the journal when data updates are made to database file records. The default value *AFTER causes only after-images to be written to the journal. The value *BOTH causes both before-images and after-images to be written to the journal. Some database techniques, such as keyed replication, may require the use of both before-images and after-images. *BOTH is also required for the IBM RMVJRNCHG (Remove Journal Change) command.
• Omit open/close entries This option allows you to specify whether open and close entries are omitted from the journal. The default value *YES indicates that open and close operations on file members or IFS tracking entries defined to the data group do not create open and close journal entries and are therefore omitted from the journal. If you specify *NO, journal entries are created for open and close operations and are placed in the journal.
• Replication type This option allows you to specify the type of replication to use for database files defined to the data group. The default value *POSITION indicates that each file is replicated based on the position of the record within the file. Positional replication uses the values of the relative record number (RRN) found in the journal entry header to locate a database record that is being updated or deleted.
The value *KEYED indicates that each file is replicated based on the value of the primary key defined to the database file. The value of the key is used to locate a database record that is being deleted or updated. Files defined using keyed replication must have at least one unique access path defined. MIMIX strongly recommends that any file configured for keyed replication also be enabled for both before-image and after-image journaling. MIMIX Dynamic Apply requires the value *POSITION. For additional information, see “Keyed replication” on page 322.
• Apply session With this option, you can assign a specific apply session for processing files defined to the data group. The default value *ANY indicates that MIMIX determines which apply session to use and performs load balancing. For IFS and object tracking entries, only apply session A is valid. For additional information see “Database apply session balancing” on page 81.
Note: Any changes made to the apply session option are not effective until the data group is started with *YES specified for the clear pending and clear error parameters.
• Lock member during apply This option allows you to choose whether you want the database apply process to lock file members when they are being updated during the apply process. This prevents inadvertent updates on the target system that can cause synchronization errors. Members are locked only when the apply process is active.
• Disable triggers during apply This option determines if MIMIX should disable any triggers on physical files during the database apply process. The default value *YES indicates that triggers should be disabled by the database apply process while the file is opened.
• Process trigger entries This option determines if MIMIX should process any journal entries that are generated by triggers. The default value *YES indicates that journal entries generated by triggers should be processed.
• Collision resolution This option determines how data collisions are resolved. The default value *HLDERR indicates that a file is put on hold if a collision is detected. The value *AUTOSYNC indicates that MIMIX will attempt to automatically synchronize the source and target file. You can also specify the name of the collision resolution class (CRCLS) to use. A collision resolution class allows you to specify how to handle a variety of collision types, including calling exit programs to handle them. See the online help for the Create Collision Resolution Class (CRTCRCLS) command for more information.
Note: The *AUTOSYNC value should not be used if the Automatic database recovery policy is enabled.

Database reader/send threshold (DBRDRTHLD) This parameter specifies the backlog threshold criteria for the database reader (DBRDR) process. If the data group is configured for MIMIX source-send processing instead of remote journaling, this threshold applies to the database send (DBSND) process. The threshold can be specified as time, journal entries, or both. When time is specified, the value is the amount of time, in minutes, between the timestamp of the last journal entry read by the process and the timestamp of the last journal entry in the journal. When a journal entry quantity is specified, the value is the number of journal entries that have not been read from the journal. If *NONE is specified for a criterion, that criterion is not considered when determining whether the backlog has reached the threshold. When the backlog reaches any of the specified criteria, the threshold exceeded condition is indicated in the status of the DBRDR process.

Database apply processing (DBAPYPRC) This parameter allows you to specify defaults for operations associated with the database apply processes. Each configured apply session uses the values specified in this parameter. The areas for which you can specify defaults are as follows:
• Force data interval You can specify the number of records that are processed before MIMIX forces the apply process information to disk from cache memory. A lower value provides easier recovery for major system failures. A higher value provides for more efficient processing. Any value other than zero (0) affects performance of the apply processes.
• Maximum open members You can specify the maximum number of members (with journal transactions to be applied) that the apply process can have open at one time. Once the limit specified is reached, the apply process selectively closes one file before opening a new file. A higher value provides more efficient processing because MIMIX does not open and close files as often.
• Threshold warning You can specify the number of entries the apply process can have waiting to be applied before a warning message is sent. When the threshold is reached, the threshold exceeded condition is indicated in the status of the database apply process and a message is sent to the primary and secondary message queues.
• Size of log user spaces (MB) You can specify the size of each log space (in megabytes) in the log space chain. Log spaces are used as a staging area for journal entries before they are applied. Larger log spaces provide better performance. A lower value reduces disk usage by the apply process.
• Apply history log spaces You can specify the maximum number of history log spaces that are kept after the journal entries are applied.
• Keep journal log user spaces You can specify the maximum number of journal log spaces to retain after the journal entries are applied. Log user spaces are automatically deleted by MIMIX. Only the number of user spaces you specify are kept.

Object processing (OBJPRC) This parameter allows you to specify defaults for object replication. The areas for which you can specify defaults are as follows:
• Object default owner You can specify the name of the default owner for objects whose owning user profile does not exist on the target system. The product default uses QDFTOWN for the owner user profile.
• DLO transmission method You can specify the method used to transmit the DLO content and attributes to the target system. The value *OPTIMIZED uses IBM i APIs. The value *SAVRST uses IBM i save and restore commands.
the replicated spooled files are deleted from the target system when they are deleted from the source system. that criterion is not considered when determining whether the backlog has reached the threshold.• IFS transmission method You can specify the method used to transmit IFS object content to the target system. The IBM i save and restore method guarantees that all attributes of an IFS object are replicated. Note: It is recommended that you use the *OPTIMIZED method of IFS transmission only in environments in which the high volume of IFS activity results in persistent replication backlogs. Object retrieval delay You can specify the amount of time. If you specify *NO. the value is the number of journal entries that have not been read from the journal. if 218 . This delay provides time for your applications to complete their access of the object before MIMIX begins packaging the object. in seconds. You must delete them manually when they are no longer needed. When time is specified. journal entries. If the DLO from the source system is being directed to a different name or folder on the target system. then the system object name will not be preserved. The value *OPTIMIZED uses IBM i APIs. During periods of peak activity. or both. The IFS optimization method does not currently replicate digital signatures or other attributes that have been added in recent IBM i releases. user profiles can then be enabled or disabled as needed as part of the switching process. in minutes. The threshold can be specified as time. If operations are switched to the backup system. to wait after an object is created or updated before MIMIX packages the object. When you specify *YES. When the backlog reaches any of the specified criterion. 
Object retrieve processing (OBJRTVPRC): This parameter allows you to specify the minimum and maximum number of jobs allowed to handle object retrieve requests and the threshold at which the number of pending requests queued for processing causes additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog jobs threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. You can also specify a warning message threshold that indicates the number of pending requests waiting in the queue for processing before a warning message is sent. If the backlog reaches the warning message threshold, the threshold exceeded condition is indicated in the status of the object retrieve (OBJRTV) process. If *NONE is specified for the warning message threshold, the process status will not indicate that a backlog exists.

Object apply processing (OBJAPYPRC): This parameter allows you to specify the minimum and maximum number of jobs allowed to handle object apply requests and the threshold at which the number of pending requests queued for processing triggers additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog jobs threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. If the backlog reaches the warning message threshold, the threshold exceeded condition is indicated in the status of the object apply process and a message is sent to the primary and secondary message queues.

Apply job description (APYJOBD): This parameter allows you to specify the name and library of the job description used to submit apply requests. The product default uses MIMIXAPY in library MIMIXQGPL for the apply job description.

Send job description (SNDJOBD): This parameter allows you to specify the name and library of the job description used to submit send jobs. The product default uses MIMIXSND in library MIMIXQGPL for the send job description.

Reorganize job description (RGZJOBD): This parameter, used by database processing, allows you to specify the name and library of the job description used to submit reorganize jobs. The product default uses MIMIXRGZ in library MIMIXQGPL for the reorganize job description.
Container send processing (CNRSNDPRC): This parameter allows you to specify the minimum and maximum number of jobs allowed to handle container send requests and the threshold at which the number of pending requests queued for processing causes additional temporary jobs to be started. The specified minimum number of jobs will be started when the data group is started. During periods of peak activity, if the number of pending requests exceeds the backlog jobs threshold, additional jobs, up to the maximum, are started to handle the extra work. When the backlog is handled and activity returns to normal, the extra jobs will automatically end. If the backlog reaches the warning message threshold, the threshold exceeded condition is indicated in the status of the container send (CNRSND) process. If *NONE is specified for the warning message threshold, the process status will not indicate that a backlog exists.

User profile for submit job (SBMUSR): This parameter allows you to specify the name of the user profile used to submit jobs. The default value *JOBD indicates that the user profile named in the specified job description is used for the job being submitted. The value *CURRENT indicates that the same user profile used by the job that is currently running is used for the submitted job.

Synchronize job description (SYNCJOBD): This parameter, used by database processing, allows you to specify the name and library of the job description used to submit synchronize jobs. This is valid for any synchronize command that does not have a JOBD parameter on the display. The product default uses MIMIXSYNC in library MIMIXQGPL for the synchronization job description.

Job restart time (RSTARTTIME): MIMIX data group jobs restart daily to maintain the MIMIX environment. You can change the time at which these jobs restart. Changing the job restart time is considered an advanced technique. The source or target role of the system affects the results of the time you specify on a data group definition. Results may also be affected if you specify a value that uses the job restart time in a system definition defined to the data group.

Additional considerations for data groups

If unwanted changes are recorded to a journal but not realized until a later time, you can backtrack to a time prior to when the changes were made by using the Remove Journal Changes (RMVJRNCHG) command provided by IBM. In order to use this command, your configuration must meet certain criteria, including specific values for some of the data group definition parameters. For more information, see “Removing journaled changes” in the Using MIMIX book.

Recovery window (RCYWIN): Configuring a recovery window¹ for a data group specifies the minimum amount of time, in minutes, that a recovery window is available, and identifies the replication processes that permit a recovery window. A recovery window introduces a delay in the specified processes to create a minimum time during which you can set a recovery point. Once a recovery point is set, you can react to anticipated problems and take action to prevent a corrupted object from reaching the target system. When the processes reach the recovery point, they are suspended so that any corruption in the transactions after that point will not automatically be processed. By its nature, a recovery window can affect the data group's recovery time objective (RTO). Consider the effect of the duration you specify on the data group's ability to meet your required RTO. You should also disable auditing for any data group that has a configured recovery window. For more information, see “Preventing audits from running” in the Using MIMIX book.

1. Recovery windows and recovery points are supported with the MIMIX CDP™ feature, which requires an additional access code.
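The RMVJRNCHG command itself is an IBM i command. As a hedged sketch of backing out changes to one journaled file (the library, file, journal, and sequence number below are placeholders, and the entry range you can remove is constrained by the criteria noted above):

```cl
/* Back out journaled changes to MYLIB/MYFILE, working backward     */
/* from the most recent entry to a known-good sequence number.      */
/* All names and the sequence number are placeholders.              */
RMVJRNCHG JRN(MYLIB/MYJRN)
          FILE((MYLIB/MYFILE *FIRST))
          FROMENTLRG(*LAST)
          TOENTLRG(123456)
```

Because changes are removed in reverse chronological order, the journal must contain before-images (*BOTH) for the entries being removed; see the journal image requirement in the create procedure below.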
Creating a data group definition

Shipped default values for the Create Data Group Definition (CRTDGDFN) command result in data groups configured for MIMIX Dynamic Apply. These data groups use remote journaling as an integral part of the user journal replication processes. For information about command parameters, see “Tips for data group parameters” on page 210.

To create a data group, do the following:
1. To access the appropriate command, do the following:
   a. From the MIMIX Basic Main Menu, type 11 (Configuration menu) and press Enter.
   b. From the MIMIX Configuration Menu, select option 4 (Work with data group definitions) and press Enter.
2. From the Work with Data Group Definitions display, type a 1 (Create) next to the blank line at the top of the list area and press Enter. The Create Data Group Definition (CRTDGDFN) display appears.
3. Specify a valid three-part name at the Data group definition prompts.
   Note: Data group names cannot be UPSMON or begin with the characters MM.
4. Do the following:
   a. Verify that the value of the Data group type prompt is what you need. MIMIX Dynamic Apply requires either *ALL or *DB. Legacy cooperative processing and user journal replication of IFS objects, data areas, and data queues require *ALL. For additional information see Table 12 in “Considerations for LF and PF files” on page 96.
   b. Ensure that the value of the Data source prompt represents the system that you want to use as the source of data to be replicated.
   c. Verify that the value of the Allow to be switched prompt is what you want.
   d. Verify that the value of the Primary transfer definition prompt is what you want.
   e. If you want MIMIX to have access to an alternative communications path, specify a value for the Secondary transfer definition prompt.
   f. Verify that the value of the Reader wait time (seconds) prompt is what you want.
   g. If you want a specific prefix to be used for jobs associated with the data group, specify a value at the Short data group name prompt. Otherwise, MIMIX will generate a prefix.
   h. For the remaining prompts on the display, verify that the values shown are what you want and, if necessary, change the values.
5. Press Enter. More prompts appear on the display that identify journaling information for the data group. If you specified *OBJ for the Data group type, skip to Step 9.
6. The Journal on target prompt appears on the display. Verify that the value shown is what you want and press Enter.
   Note: If you specify *YES and you require that the status of journaling on the target system is accurate, you should perform a save and restore operation on the target system prior to loading the data group file entries. If you have an existing journaling environment that you have identified to MIMIX in a journal definition, it is not necessary to perform a save and restore operation. You will synchronize as part of the configuration checklist.
7. The default for the Use remote journal link prompt is *YES, which is required for MIMIX Dynamic Apply and preferred for other configurations. For new data groups, MIMIX creates a transfer definition and an RJ link, if needed. To create a data group definition for a source-send configuration, change the value to *NO. See “Additional considerations for data groups” on page 220 for more information.
8. Ensure that the values of System 1 journal definition and System 2 journal definition identify the journal definitions you need. The journal definition prompt that appears is for the source system as specified in the Data source prompt.
   Notes:
   • If you have not journaled before, specify the name of the journal definition. If you are performing your initial configuration, the value *DGDFN is appropriate.
   • If you only see one of the journal definition prompts, you have specified *NO for both the Allow to be switched prompt and the Journal on target prompt.
9. At the Cooperative journal (COOPJRN) prompt, specify the journal for cooperative operations. The value *USRJRN processes through the user (database) journal while the value *SYSJRN processes through the system (audit) journal. For new data groups, the value *DFT automatically resolves to *USRJRN when Data group type is *ALL or *DB and Remote journal link is *YES.
10. Do the following:
   a. At the Number of DB apply sessions prompt, specify the number of apply sessions you want to use. Apply session A is used for IFS objects, data areas, and data queues that are configured for user journal replication. For more information see “Database apply session balancing” on page 81.
   b. If any objects to replicate are located in an auxiliary storage pool (ASP) group on either system, specify values for System 1 ASP group and System 2 ASP group as needed. The ASP group name is the name of the primary ASP device within the ASP group.
   c. Verify that the values shown for the DB journal entry processing prompts are what you want.
      Notes:
      • Replication type must be *POSITION for MIMIX Dynamic Apply.
      • *SEND is required for the IBM RMVJRNCHG (Remove Journal Change) command.
   d. At the Description prompt, type a text description of the data group definition, enclosed in apostrophes.
11. Do one of the following:
   • To accept the basic data group configuration, press Enter. The data group is created when you press Enter.
   • To access prompts for advanced configuration, press F10 (Additional Parameters) and continue with the next step.

Advanced Data Group Options: The remaining steps of this procedure are only necessary if you need to access options for advanced configuration topics. Most users can accept the default values for the remaining parameters. Because IBM i does not allow additional parameters to be prompt-controlled, you will see all parameters regardless of the value specified for the Data group type prompt. The prompts are listed in the order they appear on the display.

12. Specify the values you need for the following prompts associated with user journal replication:
   • Remote journaling threshold
   • Synchronization check interval
   • Time stamp interval
   • Verify interval
   • Data area polling interval
   • Journal at creation
   Accept the value *YES for the Adaptive cache prompt unless the system is memory constrained.
13. Specify the values you need for the following prompts associated with system journal replication:
   • Number of times to retry
   • First retry delay interval
   • Second retry delay interval
14. Specify the values you need for each of the prompts on the File and tracking ent. opts (FEOPT) parameter.
   Note: The journal image value *BOTH is required for the IBM RMVJRNCHG (Remove Journal Change) command.
15. Specify the values you need for each element of the following parameters:
   • Database reader/send threshold
   • Database apply processing
   • Object processing
   • Object send threshold
   • Object retrieve processing
   • Container send processing
   • Object apply processing
16. If necessary, change the values for the following prompts:
   • User profile for submit job
   • Send job description and its Library
   • Apply job description and its Library
   • Reorganize job description and its Library
   • Synchronize job description and its Library
   • Job restart time
17. When you are sure that you have defined all of the values that you need, press Enter to create the data group definition.

Changing a data group definition

For information about command parameters, see “Tips for data group parameters” on page 210. To change a data group definition, do the following:
1. From the Work with DG Definitions display, type a 2 (Change) next to the data group you want and press Enter. The Change Data Group Definition (CHGDGDFN) display appears.
2. Press Enter to see additional prompts. Page Down to see more of the prompts.
3. Make any changes you need for the values of the prompts.
4. If you need to access advanced functions, press F10 (Additional parameters).
5. When you are ready to accept the changes, press Enter.
Note: If you change the Number of DB apply sessions prompt (NBRDBAPY), you need to start the data group specifying *YES for the Clear pending prompt (CLRPND).

Fine-tuning backlog warning thresholds for a data group

MIMIX supports the ability to set a backlog threshold on each of the replication jobs used by a data group. When a job has a backlog that reaches or exceeds the specified threshold, the threshold condition is indicated in the job status and reflected in user interfaces. Threshold settings are meant to inform you that, while normal replication processes are active, a condition exists that could become a problem. For example, a threshold condition which occurs after starting a process that was temporarily ended, or while processing an unusually large object which rarely changes, may be an acceptable risk. However, a process that is continuously in a threshold condition, or multiple processes frequently in threshold conditions, may indicate a more serious exposure that requires attention. What is an acceptable risk for some data groups may not be acceptable for other data groups or in some environments.

Each threshold represents only one process in either the user journal replication path or the system journal replication path. Ultimately, each threshold setting must be a balance between allowing normal fluctuations to occur while ensuring that a job status is highlighted when a backlog approaches an unacceptable level of risk to your recovery time objectives (RTO) or risk of data loss.

Important! When evaluating whether threshold settings are compatible with your RTO, you must consider all of the processes in the replication paths for which the data group is configured and their thresholds. Consider the cumulative effect that having multiple processes in threshold conditions would have on RTO and your tolerance for data loss in the event of a failure.
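As the note on changing a data group definition indicates, a change to the number of apply sessions must be followed by a start that clears pending work. A hedged sketch (the three-part data group name is a placeholder; CLRPND appears in this section, but prompt the commands to confirm the exact keywords):

```cl
/* Change the number of database apply sessions for data group APPDG. */
CHGDGDFN DGDFN(APPDG SYSA SYSB) NBRDBAPY(4)

/* Restart the data group, clearing pending entries as required      */
/* after an NBRDBAPY change.                                         */
STRDG DGDFN(APPDG SYSA SYSB) CLRPND(*YES)
```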
If the threshold for one process is set higher than its shipped value, a backlog for that process may not result in a threshold condition while being sufficiently large to cause subsequent processes to have backlogs which exceed their thresholds. For each data group, you may need to use multiple options or adjust one or more threshold values multiple times before finding an appropriate setting.

Table 29 lists the shipped values for thresholds available in a data group definition, identifies the risk associated with a backlog for each replication process, and identifies available options to address a persistent threshold condition.

Table 29. Shipped threshold values for replication processes, the risk associated with a backlog, and options for resolving persistent threshold conditions

• Database reader/send threshold (shipped value: 10 minutes; options 2, 3, 4). Risk associated with a backlog: For data groups that use remote journaling, all journal entries in the database reader backlog are physically located on the target system, but MIMIX has not started to replicate them. For data groups that use MIMIX source-send processing, all journal entries in the database send backlog are waiting to be read and to be transmitted to the target system; the backlogged journal entries exist only in the source system and are at risk of being lost if the source system fails. If the source system fails, journal analysis may be required. After the source system becomes available again, these entries need to be read and applied before switching.

• Database apply warning message threshold (shipped value: 100,000 entries; options 2, 3, 4). Risk associated with a backlog: All of the entries in the database apply backlog are waiting to be applied to the target system. Before switching, these entries need to be applied. A large backlog can also affect performance.
• Remote journaling threshold (shipped value: 10 minutes; options 3, 4). Risk associated with a backlog: All journal entries in the backlog for the remote journaling function exist only in the source system journal and are waiting to be transmitted to the remote journal. These entries cannot be processed by MIMIX user journal replication processes and are at risk of being lost if the source system fails. If the source system fails, journal analysis may be required. After the source system becomes available again, these entries need to be read and applied before switching.

• Object send threshold (shipped value: 10 minutes; options 2, 3, 4). Risk associated with a backlog: All of the journal entries in the object send backlog exist only in the system journal on the source system and are at risk of being lost if the source system fails. MIMIX may not have determined all of the information necessary to replicate the objects associated with the journal entries, and any related objects for which an automatic recovery action was collecting data may be lost. As this backlog clears, subsequent processes may have backlogs as replication progresses.

• Object retrieve warning message threshold (shipped value: 100 entries; options 1, 2, 3, 4). Risk associated with a backlog: All of the objects associated with journal entries in the object retrieve backlog are waiting to be packaged so they can be sent to the target system. The latest changes to these objects exist only in the source system and are at risk of being lost if the source system fails. As this backlog clears, subsequent processes may have backlogs as replication progresses.

• Container send warning message threshold (shipped value: 100 entries; options 1, 2, 3, 4). Risk associated with a backlog: All of the packaged objects associated with journal entries in the container send backlog are waiting to be sent to the target system. The latest changes to these objects exist only in the source system and are at risk of being lost if the source system fails. As this backlog clears, subsequent processes may have backlogs as replication progresses.

• Object apply warning message threshold (shipped value: 100 requests; options 1, 2, 3, 4). Risk associated with a backlog: All of the entries in the object apply backlog are waiting to be applied to the target system. Before switching, these entries need to be applied.

The following options are available, listed in order of preference. Some options are not available for all thresholds.

Option 1 - Adjust the number of available jobs. This option is available only for the object retrieve, container send, and object apply processes. Each of these processes has a configurable minimum and maximum number of jobs, a threshold at which more jobs are started, and a warning message threshold. If the number of entries in a backlog divided by the number of active jobs exceeds the job threshold, extra jobs are automatically started in an attempt to address the backlog. If the backlog reaches the higher value specified in the warning message threshold, the process status reflects the threshold condition. If the process frequently shows a threshold status, the maximum number of jobs may be too low or the job threshold value may be too high. Adjusting either value in the data group configuration can result in more throughput.

Option 2 - Change threshold values or add criteria. All processes support changing the threshold value. Changes to threshold values are effective the next time the process status is requested. In addition, some processes support specifying additional threshold criteria not used by shipped default settings. For the remote journal, database reader (or database send), and object send processes, if the quantity of entries is more of a concern than time, you can adjust the threshold so that a number of journal entries is used as criteria instead of, or in conjunction with, a time value. If both time and entries are specified, the first criterion reached will trigger the threshold condition.

Option 3 - Temporarily increase job performance. Use work management functions to increase the resources available to a job by increasing its run priority or its timeslice (CHGJOB command). These changes are effective only for the current instance of the job. The changes do not persist if the job is ended manually or by nightly cleanup operations resulting from the configured job restart time (RSTARTTIME) on the data group definition. This option is available for all processes except the RJ link.

Option 4 - Get assistance. If you tried the other options and threshold conditions persist, it may be necessary to change configurations to adjust what is defined to each data group or to make permanent work management changes for specific jobs. Contact your Certified MIMIX Consultant for assistance.
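Option 3 uses standard IBM i work management commands. For example, to temporarily raise the resources of one replication job (the job number, user, and job name below are placeholders):

```cl
/* Raise the run priority and lengthen the timeslice of one job    */
/* instance. These changes last only for the current instance of   */
/* the job and do not survive a job restart.                       */
CHGJOB JOB(123456/MIMIXOWN/APPDGAPY) RUNPTY(15) TIMESLICE(2000)
```

RUNPTY and TIMESLICE are standard CHGJOB parameters; a lower RUNPTY number means a higher priority, and TIMESLICE is specified in milliseconds.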
CHAPTER 11 Additional options: working with definitions

The procedures for performing common functions, such as copying, displaying, deleting, and printing definitions, are very similar for all types of definitions used by MIMIX. The generic procedures in this topic can be used for copying, displaying, deleting, and renaming definitions. Specific procedures are included for renaming each type of definition and for swapping system definition names.

The topics in this chapter include:
• “Copying a definition” on page 229 provides a procedure for copying a system definition, transfer definition, journal definition, or a data group definition.
• “Deleting a definition” on page 230 provides a procedure for deleting a system definition, transfer definition, journal definition, or a data group definition.
• “Displaying a definition” on page 231 provides a procedure for displaying a system definition, transfer definition, journal definition, or a data group definition.
• “Printing a definition” on page 232 provides a procedure for creating a spooled file, which you can print, that identifies a system definition, transfer definition, journal definition, or a data group definition.
• “Renaming definitions” on page 232 provides procedures for renaming definitions, such as renaming a system definition, which is typically done as a result of a change in hardware.
• “Swapping system definition names” on page 238 provides a procedure to swap system definition names.

Copying a definition

Use this procedure on a management system to copy a system definition, transfer definition, journal definition, or a data group definition.

Notes for journal definitions:
• The journal definition identified in the From journal definition prompt must exist before it can be copied.
• The journal definition identified in the To journal definition prompt cannot exist when you specify *NO for the Replace definition prompt. If you specify *YES for the Replace definition prompt, the To journal definition must exist.
• It is possible to introduce conflicts in your configuration when replacing an existing journal definition. These conflicts are automatically resolved, or an error message is sent when the journal environment for the definition is built.

Notes for data group definitions:
• The data group entries associated with a data group definition are not copied.
• Before you copy a data group definition, ensure that activity is ended for the definition to which you are copying.
To copy a definition, do the following:
Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 84 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter. The "Work with" display for the definition type appears.
3. Type a 3 (Copy) next to the definition you want and press Enter. The Copy display for the definition type you selected appears.
4. At the To definition prompt, specify a name for the definition to which you are copying information.
5. If you are copying a journal definition or a data group definition, the display has additional prompts. Verify that the values of the prompts are what you want.
6. The value *NO for the Replace definition prompt prevents you from replacing an existing definition. If you want to replace an existing definition, specify *YES.
7. To copy the definition, press Enter.

Deleting a definition

Use this procedure on a management system to delete a system definition, transfer definition, journal definition, or a data group definition. Ensure that the definition you delete is not being used for replication, and be aware of the following:
• If you delete a system definition, all other configuration elements associated with that definition are deleted. This includes journal definitions, transfer definitions, and data group definitions with all associated data group entries.
• When you delete a journal definition, only the definition is deleted. The files being journaled, the journal, and the journal receivers are not deleted.
• If you delete a data group definition, all of its associated data group entries are also deleted. The delete function does not clean up any records for files in the error/hold file.

Attention: When you delete a system or data group definition, information associated with the definition is also deleted.

To delete a definition, do the following:
Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 84 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter. The "Work with" display for the definition type appears.
3. Ensure that the definition you want to delete is not being used for replication. Do the following:
   a. From the MIMIX Main Menu, select option 2 (Work with systems) and press Enter.
   b. On the Work with Systems display, type an 8 (Work with data groups) next to the system you want and press Enter. The result is a list of data groups for the system you selected.
   c. On the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
   d. On the Work with DG File Entries display, verify that the status of the file entries is *INACTIVE. If necessary, use option 10 (End journaling).
   e. On the Work with Data Groups display, use option 10 (End data group).
   f. On the Work with Systems display, use option 10 (End managers).
4. Type a 4 (Delete) next to the definition you want and press Enter. A confirmation display appears with a list of definitions to be deleted.
5. To delete the definitions, press Enter.

Displaying a definition

Use this procedure to display a system definition, transfer definition, journal definition, or a data group definition. To display a definition, do the following:
Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 84 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter. The "Work with" display for the definition type appears.
3. Type a 5 (Display) next to the definition you want and press Enter. The definition display appears.
4. Page Down to see all of the values.
Printing a definition

Use this procedure to create a spooled file, which you can print, that identifies a system definition, transfer definition, journal definition, or a data group definition. To print a definition, do the following:
Note: The following procedure includes using MIMIX menus. See “Accessing the MIMIX Main Menu” on page 84 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition you want and press Enter. The "Work with" display for the definition type appears.
3. Type a 6 (Print) next to the definition you want and press Enter. A spooled file is created with a name of MX***DFN, where *** indicates the type of definition.
4. You can print the spooled file according to your standard print procedures.

Renaming definitions

The procedures for renaming a system definition, transfer definition, journal definition, or data group definition must be run from a management system. This section includes the following procedures:
• “Renaming a system definition” on page 232
• “Renaming a transfer definition” on page 235
• “Renaming a journal definition with considerations for RJ link” on page 236
• “Renaming a data group definition” on page 237

Attention: Before you rename any definition, ensure that all other configuration elements related to it are not active.

Renaming a system definition

System definitions are typically renamed as a result of a change in hardware. When you rename a system definition, all other configuration information that references the system definition is automatically modified to include the updated system name. This includes journal definitions, transfer definitions, data group definitions, and associated data group entries.

Attention: Before you rename a system definition, ensure that MIMIX activity is ended by using the End Data Group (ENDDG) and End MIMIX Manager (ENDMMXMGR) commands.
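ENDDG and ENDMMXMGR are named in the attention note above. A hedged sketch of ending activity before a rename (the data group name is a placeholder, and the parameter keywords are assumptions to be verified by prompting each command):

```cl
/* End replication activity for the data group, then end the MIMIX */
/* managers. Keyword names below are assumed for illustration.     */
ENDDG DGDFN(APPDG SYSA SYSB)

ENDMMXMGR MGR(*ALL)
```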
From the Work with Data Groups display. Press F10 to access additional parameters. select option 2 (Work with transfer definitons) and press Enter. For all systems. ensure communications before continuing. From the Work with Systems Definitions (WRKSYSDFN) display type a 7 233 . From the MIMIX Configuration Menu. 8. Specify the System 1 host name or address and System 2 host name or address as the actual host names or IP addresses of the systems and press Enter. Otherwise. do the following: a. If changing the host name or IP address. use the host name or IP address specified in Step 6. 4. 6. b. 7. If you changed these. 9. Autostart entries must be reviewed for possible updates of a new system name or IP address. See “Accessing the MIMIX Main Menu” on page 84 for information about using these. The Work with Transfer Definitions display appears. For each data group listed. 3.Renaming definitions To rename system definitions. Note: Many installations will have an autostart entry for the STRSVR command. Start the MIMIXSBS subsystem and the port jobs on all systems using the host names or IP addresses. see “Identifying the current autostart job entry information” on page 169 and “Changing an autostart job entry and its related job description” on page 169. Perform a controlled end of the MIMIX installation. From the MIMIX Intermediate Main Menu. b. select option 2 (Work with systems) and press Enter. 5. do the following steps. d. select option 11 (Configuration menu) and press Enter. The Change Transfer Definition (CHGTFRDFN) display appears. Record the Last Read Receiver name and Sequence # for both database and object. From the MIMIX Intermediate Main Menu. continue with Step 7. e. a. End the MIMIXSBS subsystem on all systems. Follow the steps in topic “Verifying all communications links” on page 174. select option 8 (Display status) and press Enter. 1. See the Using MIMIX book for procedures for ending MIMIX. 
See the Using MIMIX book for procedures for ending the MIMIXSBS subsystem. and press Enter. c. From the Work with Systems display. Select option 2 (Change) and press Enter. For more information. 2. At the Manager prompt. Press Enter. do the following: a. Press Enter. The Start MIMIX Managers (STRMMXMGR) display appears. Press F3 to return to the Work with Systems display. Type a 9 (Start) next to each network system you want and press Enter. The Start Data Group (STRDG) display appears. For each network system. that were recorded in Step 5b for both database and object. Press Enter. b. From the Work with Data Groups display. 21. Type a 9 (Start) next to the management system you want and press Enter. b. 14. 19. 12. Press Enter again to start the data groups. Refer to the Using MIMIX book for more information. press F12. 15. The Start Data Group (STRDG) display appears. For each data group listed. 20. Ensure all data groups are active. Press F10 to display additional parameters. From the Work with Systems display. 24. From the Work with Data Groups display. From the Work with Systems display. Do the following: a. The Start MIMIX Managers (STRMMXMGR) display appears. specify *ALL. select option 8 (Work with data groups) on the management system and press Enter. The Rename System Definitions (RNMSYSDFN) display appears. The Work with Systems display appears. 16. 13. b. select option 9 (Start DG) and press Enter. At the To system definition prompt. d. In the Reset configuration prompt. Press F12 again to return to the MIMIX Intermediate Main Menu. Press Enter. The Work with data groups display reappears. Press F10 to access additional parameters. Additional parameters are displayed. 18. c. specify *YES. 23. 10. do the following: a. select option 9 (Start DG) for data groups (highlighted red) that are not active and press Enter. specify the new name for the system whose definition is being renamed and press Enter. The Work with Systems display appears. 11. c. 
select option 8 (Work with data groups) on the system whose definition you have renamed and ensure all data groups are active.Additional options: working with definitions (Rename) next to the system whose definition is being renamed and press Enter. From the Work with Systems display. 22. adding 1 to the sequence #s. Type the Receiver names and Sequence #s. Select option 2 (Work with systems) and press Enter. Once this is complete. 234 . 17. select option 8 (Work with data groups) on the system whose definition you have renamed and press Enter. select option 2 (Work with transfer definitions) and press Enter. type a 2 (Change) next to the system name whose transfer definition needs to be changed and press Enter. To rename a transfer definition. From the Change Data Group Definition display. From the MIMIX Configuration Menu. All of the steps must be completed. From the MIMIX Configuration Menu.Renaming definitions Press F5 to refresh data. From the MIMIX Configuration Menu. Press F12 to return to the MIMIX Configuration Menu. type a 2 (Change) next to the data group name whose transfer definition needs to be changed and press Enter. The following procedure renames the transfer definition and includes steps to update the other configuration information that references the transfer definition including the system definition. The Rename Transfer Definition display for the definition type you selected appears. select option 8 (Work with remote journal links) and press Enter. From the Work with Transfer Definitions menu. Refer to the Using MIMIX book for more information. At the To transfer definition prompt. From the Work with System Definitions menu. 3. Renaming a transfer definition When you rename a transfer definition. Press F12 to return to the MIMIX Configuration Menu. 9. and remote journal link. 7. press F11 to display the transfer definitions. Press F12 to return to the MIMIX Configuration Menu. 235 . 4. 
You must manually update other information which references the transfer definition. other configuration information which references it is not updated with the new name. 2. From the Work with DG Definitions menu. 11. select option 11 (Configuration menu) and press Enter. 8. 13. specify the values you want for the new name and press Enter. select option 1 (Work with system definitions) and press Enter. See “Accessing the MIMIX Main Menu” on page 84 for information about using these. specify the new name for the transfer definition and press Enter until the Work with DG Definitions display appears. 12. data group definition. From the MIMIX Configuration Menu. 1. 14. 15. 10. select option 4 (Work with data group definitions) and press Enter. 6. From the Change System Definition display. do the following from the management system: Note: The following procedure includes using MIMIX menus. 5. From the Work with RJ Links menu. specify the new name for the transfer definition and press Enter. From the MIMIX Intermediate Main Menu. type a 7 (Rename) next to the definition you want to rename and press Enter. Perform a controlled end for the data group in your remote journaling environment. other configuration information which references it is not updated with the new name. 2. At the To journal definition prompts. 1. If you do not want the journal name to be renamed. Press F12 to return to the MIMIX Configuration Menu. type a 15 (Remove RJ connection) next to the link that you want and press Enter. including considerations when an RJ link is used. The remote journal link should have a state value of *INACTIVE before you continue. From the MIMIX Configuration Menu. Verify that the remote journal link is not in use on both systems. e. From the Work with Journal Definitions menu. select option 11 (Configuration menu) and press Enter. Otherwise. The Rename Journal Definition display for the definition you selected appears. 
do the following from the management system: Note: The following procedure includes using MIMIX menus. you must specify the journal name rather than the default of *JRNDFN for the journal (JRN) parameter. Renaming a journal definition with considerations for RJ link When you rename a journal definition. If you rename a journal definition. select option 8 (Work with remote journal links) and press Enter. 5. To continue removing the connections for the selected links. End the remote journal link in a controlled manner. This procedure includes steps for renaming the journal definition in the data group definition. Use topic “Ending all replication in a controlled manner” in the Using MIMIX book. d. 3. select option 3 (Work with journal definitions) and press Enter. From the MIMIX Intermediate Main Menu. b. specify the new name for the transfer definition and press Enter. do the following. type a 7 (Rename) next to the journal definition names you want to rename and press Enter. Use topic “Ending a remote journal link independently” in the Using MIMIX book. 17. From the Change Remote Journal Link display. press Enter. See “Accessing the MIMIX Main Menu” on page 84 for information about using these. 236 . c. specify the values you want for the new name. the journal name will also be renamed if you used the default value of *JRNDFN when configuring the journal definition. Remove the remote journal connection (the RJ link).Additional options: working with definitions 16. From the MIMIX Configuration Menu. To rename a journal definition. A confirmation display appears. Type a 2 (Change) next to the RJ link where you changed the transfer definition and press Enter. From the Work with RJ Links display. continue with Step 3: a. If using remote journaling. 4. f. Use topic “Displaying status of a remote journal link” in the Using MIMIX book. Press Enter. Otherwise. select option 4 (Work with data group definitions) and press Enter. From the MIMIX Configuration Menu. 1. 11. 
type a 2 (Change) next to the data group name that uses the journal definition you changed and press Enter. Specify the values entered in Step 5 and press Enter. ensure that there are no journal receiver prefixes in the specified library whose names start with the journal receiver prefix. 2. See “Building the journaling environment” on page 195 for more information. b. 10. 6. 12. ensure that the data group has a status of *INACTIVE. 13. do the following to change the corresponding definition for the remote journal. The Work with Journal Definitions display appears. select option 4 (Work with data group definitions) and press Enter. type a 14 (Build) next to the journal definition names you changed and press F4. You should see a message that indicates the journal environment was created. select option 11 (Configuration menu) and press Enter. From the MIMIX Intermediate Main Menu. The Build Journaling Environment display appears. From the Work with DG Definitions menu. 8. If the journal name is *JRNDFN. At the Source for values prompt. Type a 2 (Change) next to the corresponding remote journal definition name you changed and press Enter. 3. Press F12 to return to the MIMIX Configuration Menu. continue with Step 8: a. 14. 237 . Press F10 to access additional parameters. If using remote journaling. From the Work with Journal Definitions menu. end it using the procedure “Ending a data group in a controlled manner” in the Using MIMIX book. From the MIMIX Configuration Menu. Attention: Before you rename a data group definition. Press Enter. 9. If the data group is active. specify the new name for the System 1 journal definition and System 2 journal definition and press Enter twice. From the Work with DG Definitions menu. specify *JRNDFN.Renaming definitions a. Renaming a data group definition Do the following to rename a data group definition: Note: The following procedure includes using MIMIX menus. 
type a 7 (Rename) next to the data group name you want to rename and press Enter. 4. See “Accessing the MIMIX Main Menu” on page 84 for information about using these. 7. Ensure that the data group is ended. From the Change Data Group Definition display. Ensure each step is successful before proceeding to the next step. Enter a temporary name for the network system (SYSTEMA) in the To system definition prompt. Port jobs must be running on both systems. The Start MIMIX Managers (STRMMXMGR) display appears. 4. Enter *YES for Reset configuration and press Enter. Attention: Before you swap system definition names. Refer to the following requirements before beginning this procedure: Requirements for swapping system definition names • • • • • This procedure must be run from the management system. ensure that MIMIX activity is ended by using the End Data Group (ENDDG) and End MIMIX Manager (ENDMMXMGR) commands. select option 1 (Work with system definitions) and press Enter. Press F12. Press Enter. 6. 10. To swap system definition names. 238 . Press F12 again to return to the MIMIX Intermediate Main Menu. 3. The Rename System Definitions (RNMSYSDFN) display appears. Record system definition names. 9. 2. including temporary names used for this procedure. From the Rename Data Group Definition display. 8. On the temporary system.Additional options: working with definitions 5. 7. See “Accessing the MIMIX Main Menu” on page 84 for information about using these. Swapping system definition names Use the procedure in this section to swap system definition names. Type a 7 (Rename) next to the network system definition (SYSTEMA) and press Enter. Select option 2 (Work with systems) and press Enter. select option 11 (Configuration menu) and press Enter. Use either the IP addresses or the actual host names in the transfer definition. The Work with Systems display appears. 1. do the following: Note: The following procedure includes using MIMIX menus. 
specify the new name for the data group definition and press Enter. The following procedure uses SYSTEMA for the network system definition and SYSTEMB for the management system definition. The Work with System Definitions (WRKSYSDFN) display appears. Press F10 to display additional parameters. From the MIMIX Intermediate Main Menu. From the MIMIX Configuration Menu. 5. select option 9 (Start) and press Enter. select option 11 (Configuration menu) and press Enter. 25. The Start MIMIX Managers (STRMMXMGR) display appears. From the MIMIX Configuration Menu. Type a 7 (Rename) next to the temporary network system definition and press Enter. 19. 21. 12. 13. 26. 29. 31. Select option 2 (Work with systems) and press Enter. On both systems. Press F12. Press F10 to display Additional parameters. 30. On both systems. Type a 7 (Rename) next to the management system definition (SYSTEMB) and press Enter. 28. 32. 23. 33. Press F10 to 239 . Press Enter. Enter the old network system definition name (SYSTEMA) in the To system definition prompt. select option 9 (Start) and press Enter. The Work with System Definitions (WRKSYSDFN) display appears. Press F12. select option 9 (Start) and press Enter. The Rename System Definitions (RNMSYSDFN) display appears. select option 11 (Configuration menu) and press Enter. Press F12 to return to the MIMIX Intermediate Main Menu. From the MIMIX Intermediate Main Menu. The Work with Systems display appears. select option 1 (Work with system definitions) and press Enter. Enter *YES for Reset configuration and press Enter for both systems. 17. From the MIMIX Configuration Menu. Select option 10 (End) for both systems and press Enter. 24. Press Enter. 22. Press F12 to return to the MIMIX Intermediate Main Menu. Press F12 again to return to the MIMIX Intermediate Main Menu. select option 1 (Work with system definitions) and press Enter. 16. Enter the old management system definition name (SYSTEMB) in the To system definition prompt. 15. 
The Rename System Definitions (RNMSYSDFN) display appears. 20. 27. 18. Press F12 to return to the MIMIX Intermediate Main Menu. The Start MIMIX Managers (STRMMXMGR) display appears. Ensure the systems are ended before proceeding. Select option 2 (Work with systems) and press Enter. From the MIMIX Intermediate Main Menu. From the Work with Systems display select option 10 (End) for both systems and press Enter. 14. The Work with Systems display appears.Swapping system definition names 11. The Work with System Definitions (WRKSYSDFN) display appears. 240 . 34.Additional options: working with definitions display Additional parameters. Enter *YES for Reset configuration and press Enter. The topics in this chapter include: • “Creating data group object entries” on page 242 describes data group object entries which are used to identify library-based objects for replication. Procedures for creating these are included. “Creating data group data area entries” on page 261 describes data group data area entries which identify data areas to be replicated by the data area poller process. Procedures for creating these are included. Procedures for creating these are included. “Creating data group file entries” on page 246 describes data group file entries which are required for user journal replication of *FILE objects. You can add individual data group entries. “Loading tracking entries” on page 257 describes how to manually load tracking entries for IFS objects. data areas. and data queues that are configured for user journal replication. “Additional options: working with DG entries” on page 263 provides procedures for performing data group entry common functions. “Creating data group IFS entries” on page 255 describes data group IFS entries which identify IFS objects for replication.CHAPTER 12 Configuring data group entries Data group entries can identify one or many objects to be replicated or excluded from replication. 
• “Creating data group file entries” on page 246 describes data group file entries which are required for user journal replication of *FILE objects. You can add individual data group entries, load entries from an existing source, and change entries as needed. Procedures for creating these are included.
• “Creating data group IFS entries” on page 255 describes data group IFS entries which identify IFS objects for replication. Procedures for creating these are included.
• “Loading tracking entries” on page 257 describes how to manually load tracking entries for IFS objects, data areas, and data queues that are configured for user journal replication.
• “Creating data group DLO entries” on page 259 describes data group DLO entries which identify document library objects (DLOs) for replication by MIMIX system journal replication processes. Procedures for creating these are included.
• “Creating data group data area entries” on page 261 describes data group data area entries which identify data areas to be replicated by the data area poller process. Procedures for creating these are included.
• “Additional options: working with DG entries” on page 263 provides procedures for performing data group entry common functions, such as copying, removing, and displaying.

The appendix “Supported object types for system journal replication” on page 505 lists IBM i object types and indicates whether each object type is replicated by MIMIX.

Creating data group object entries

Data group object entries are used to identify library-based objects for replication. How replication is performed for the objects identified depends on the object type and configuration settings. For *FILE objects, values specified in the object entry and other configuration information determine whether the object is replicated through the system journal or is cooperatively processed with the user journal. For object types that can be journaled (*FILE, *DTAARA, and *DTAQ), several configuration options are available, some of which also require data group file entries to be configured. For object types that cannot be journaled to a user journal, system journal replication processes are used.

For detailed concepts and requirements for supported configurations, see the following topics:
• “Identifying library-based objects for replication” on page 91
• “Identifying logical and physical files for replication” on page 96
• “Identifying data areas and data queues for replication” on page 103

When you configure MIMIX, you can create data group object entries by adding individual object entries or by using the custom load function for library-based objects. The custom load function can simplify creating data group entries. In this procedure, you specify selection criteria that results in a list of objects with similar characteristics, from which you can selectively create data group object entries. You can customize individual entries later.

For example, if you want to replicate all but a few of the data areas in a specific library, you could use the Add Data Group Object Entry (ADDDGOBJE) command to create a single data group object entry that includes all data areas in the library. Then, using the same object selection criteria with the custom load function, you can select from a list of data areas in the library to create exclude entries for the objects you do not want replicated.

Once you have created data group object entries, you can tailor them to meet your requirements. You can also use the #DGFE audit or the Check Data Group File Entries (CHKDGFE) command to ensure that the correct file entries exist for the object entries configured for the specified data group.

Loading data group object entries

This function generates a list of objects that match your specified criteria, from which you can selectively create data group object entries. From the management system, do the following to create a custom load of object entries:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 20 (Object entries) next to the data group you want and press Enter. The Work with DG Object Entries display appears.
3. Press F19 (Load). The Load DG Object Entries (LODDGOBJE) display appears.
4. Do the following to specify the selection criteria:
   a. Identify the library and objects to be considered. Specify values for the System 1 library and System 1 object prompts.
   b. If necessary, specify values for the Object type, Attribute, System 2 library, and System 2 object prompts.
   c. At the Process type prompt, specify whether resulting data group object entries should include or exclude the identified objects.
   d. Press F9 (All parameters).
   e. Press Page Down to see all of the prompts.
5. Specify appropriate values for the Cooperate with database and Cooperating object types prompts.
6. Ensure that the remaining prompts contain the values you want for the data group object entries that will be created.
7. To specify file entry options that will override those set in the data group definition, do the following:
   a. Press Page Down until you locate the File entry options prompt.
   b. Specify the values you need on the elements of the File entry options prompt.
8. To generate the list of objects, press Enter. The Load DG Object Entries display appears with the list of objects that matched your selection criteria.
   Note: If you skipped Step 5, you may need to press Enter multiple times.
9. Either type a 1 (Select) next to the objects you want or press F21 (Select all). Then press Enter.

If necessary, you can use “Adding or changing a data group object entry” on page 243 to customize values for any of the data group object entries.

Note: To ensure that journaled files, data areas, and data queues will be replicated from the user journal, synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.

Adding or changing a data group object entry

Note: If you are converting a data group to use user journal replication for data areas or data queues, use this procedure when directed by “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling” on page 136.

From the management system, do the following to add a new data group object entry or change an existing entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 20 (Object entries) next to the data group you want and press Enter. The Work with DG Object Entries display appears.
3. Do one of the following:
   • To add a new entry, type a 1 (Add) next to the blank line at the top of the list and press Enter.
   • To change an existing entry, type a 2 (Change) next to the entry you want and press Enter.
4. The appropriate Data Group Object Entry display appears. When adding an entry, you must specify values for the System 1 library and System 1 object prompts.
   Note: When changing an existing object entry to enable replication of data areas or data queues from a user journal (COOPDB(*YES)), make sure that you specify only the objects you want to enable for the System 1 object prompt. Otherwise, all objects in the library specified for System 1 library will be enabled.
5. If necessary, specify a value for the Object type prompt.
6. If necessary, specify values for the Attribute, System 2 library, System 2 object, and Object auditing value prompts.
7. At the Process type prompt, specify whether resulting data group object entries should include (*INCLD) or exclude (*EXCLD) the identified objects.
8. Press F9 (All parameters).
9. Press Page Down to see more prompts.
10. Specify appropriate values for the Cooperate with database and Cooperating object types prompts. For object entries configured for user journal replication of data areas or data queues, you must specify *YES for Cooperate with database and you must specify the appropriate object types for Cooperating object types.
11. To specify file entry options that will override those set in the data group definition, do the following:
    a. Press Page Down to locate the File entry options prompt.
    b. Specify the values you need on the elements of the File entry options prompt.
12. Press Enter.
13. If you are converting a data group to use user journal replication, return to Step 7 in procedure “Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling” on page 136 to complete additional steps necessary to complete the conversion.

Note: To ensure that journaled files, data areas, or data queues will be replicated from the user journal, synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. This includes after the nightly restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an audit runs.
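The include/exclude pattern described for data areas can also be sketched at the command level. This is only an illustrative sketch: the ADDDGOBJE command, the *INCLD and *EXCLD process types, and the DGDFN and LIB1 keywords appear elsewhere in this book, but the OBJ1, TYPE, and PRCTYPE keyword names and all object names shown here (DGDFN1, PAYLIB, TEMPDTA) are assumptions for illustration. Prompt the command with F4 to verify the actual parameter keywords on your system.

   ADDDGOBJE DGDFN(DGDFN1) LIB1(PAYLIB) OBJ1(*ALL) TYPE(*DTAARA) PRCTYPE(*INCLD)
   ADDDGOBJE DGDFN(DGDFN1) LIB1(PAYLIB) OBJ1(TEMPDTA) TYPE(*DTAARA) PRCTYPE(*EXCLD)

As in the example in this topic, the first entry includes all data areas in the library, and the exclude entry identifies an object that the broad include entry would otherwise replicate.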
This option supports loading from version 5 and version 6 data groups on the same system. System 2 library (LIB2). This value is typically used when loading file entries from a data group in a different installation of MIMIX. 246 . If you are configuring to use MIMIX Dynamic Apply or legacy cooperative processing. Once you have created the file entries. The Default FE options source (FEOPTSRC) parameter determines whether file entry options are loaded from the specified configuration source (*CFGSRC) or from the data group definition (*DGDFT). *DGFE . It is strongly recommended that you create data group object entries first. this value is recommended. Then. load the data group file entries from the object entry information defined for the files. and override elements if needed. For detailed concepts and requirements for supported configurations.File entry information is loaded from data group file entries defined to another data group.Creating data group file entries Data group file entries are required for user journal replication of *FILE objects. listed below in order most commonly used: • *DGOBJE . *NONE .File entry information is loaded from a library on either the source or target system. you can create data group file entry information by creating data group file entries individually or by loading entries from another source. You can use the #DGFE audit or the Check Data Group File Entries (CHKDGFE) command to ensure that the correct file entries exist for the object entries configured for the specified data group. Any values specified on elements of the File entry options (FEOPT) parameter override the values loaded from the FEOPTSRC parameter for all data group file entries created by a load request. files must be defined by both data group object entries and data group file entries. The data group definition specifies *SYS1 as its data source (DTASRC). 
Loading file entries from a data group's object entries

This topic contains examples and a procedure for creating file entries, with file entry options loaded from multiple sources. The examples illustrate the flexibility available for loading file entry options.

Note: The Load Data Group File Entries (LODDGFE) command performs a journal verification check on the file entries using the Verify Journal File Entries (VFYJRNFE) command. In order to accurately determine whether files are being journaled to the target system, the Load Data Group File Entries (LODDGFE) command must be used from a system designated as a management system.

Example - Load from the same data group: This example illustrates how to create file entries when converting a data group to use MIMIX Dynamic Apply. In this example, data group DGDFN1 is being converted; file entries will be loaded from the target system to take advantage of a known synchronization point at which replication will later be started. Regardless of where the configuration source and file entry option source are located, you should first perform a save and restore operation to synchronize the files to the target system before loading the data group file entries.

LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) UPDOPT(*ADD) LODSYS(*SYS2) SELECT(*NO)

Since no value was specified for FROMDGDFN, its default value *DGDFN causes the file entries to load from existing object entries for DGDFN1. Entries are added (UPDOPT(*ADD)) to the existing configuration. The value *SYS2 for LODSYS causes this example configuration to load from its target system. Since all files identified by object entries are wanted, SELECT(*NO) bypasses the selection list. The data group file entries created for DGDFN1 have file entry options which match those found in the object entries because no values were specified for the FEOPTSRC or FEOPT parameters.

Example - Load from another data group with mixed sources for file entry options: The file entries for data group DGDFN1 are created by loading from the object entries for data group DGDFN2, with file entry options coming from multiple sources.

LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) FROMDGDFN(DGDFN2) FEOPT(*CFGSRC *DGDFT *CFGSRC *DGDFT)

The data group file entries created for DGDFN1 are loaded from the configuration information in the object entries for DGDFN2. Because the command specified the first element (Journal image) and third element (Replication type) of the file entry options (FEOPT) as *CFGSRC, the resulting file entries have the same values for those elements as the data group object entries for DGDFN2. Because the command specified the second element (Omit open/close entries) and the fourth element (Lock member during apply) as *DGDFT, these elements are loaded from the data group definition. The rest of the file entry options are loaded from the configuration source (object entries for DGDFN2).

Procedure: Use this procedure to create data group file entries from the object entries defined to a data group. Each generated file entry includes all members of the file.

Note: The data group must be ended before using this procedure.

From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter. The Work with DG File Entries display appears.
3. Press F19 (Load). The Load Data Group File Entries (LODDGFE) display appears. The name of the data group for which you are creating file entries and the Configuration source value of *DGOBJE are pre-selected.
4. To load from entries defined to a different data group, specify the three-part name of the data group at the From data group definition prompts.
5. The following prompts appear on the display. Do the following:
   a. Load from system - Ensure that the value specified is appropriate. For most environments, files should be loaded from the source system of the data group you are loading. (This value should be the same as the value specified for Data source in the data group definition.)
   b. Update option - Specify the value you want.
   c. Default FE options source - Specify the source for loading values for default file entry options. Each element in the file entry options is loaded from the specified location unless you explicitly specify a different value for an element in Step 6.
   d. Press Enter.
6. Optionally, you can specify a file entry option value to override those loaded from the configuration source. Do the following:
   a. Press F10 (Additional parameters).
   b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
7. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
8. Either type a 1 (Load) next to the files that you want, or press F21 (Select all).
9. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries.

If necessary, you can use "Changing a data group file entry" on page 253 to customize values for any of the data group file entries. Configuration changes resulting from loading file entries are not effective until the data group is restarted.
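For most environments, which load from the source system rather than the target, the same request can be sketched on a command line as follows. This variant is an illustration assembled from the parameters shown in the examples above; it assumes *SYS1 is the source system of the data group being loaded.

```
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) LODSYS(*SYS1) SELECT(*NO)
```

As in the first example, omitting the FEOPTSRC and FEOPT parameters causes the resulting file entry options to match those found in the object entries.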
Loading file entries from a library

Example: The data group file entries are created by loading from a library named TESTLIB on the source system. This example assumes the configuration is set up so that system 1 in the data group definition is the source for replication. File and library names on the source and target systems are set to the same names for the load operation.

LODDGFE DGDFN(DGDFN1) CFGSRC(*NONE) LIB1(TESTLIB)

Since the FEOPT parameter was not specified, the resulting data group file entries are created with a value of *DFT for all of the file entry options. Because there is no MIMIX configuration source specified, the value *DFT results in the file entry options specified in the data group definition being used.

Procedure: Use this procedure to create data group file entries from a library on either the source system or the target system. Each generated file entry includes all members of the file. The value of the Default FE options source prompt is ignored when loading from a library.

Note: The data group must be ended before using this procedure.

From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter. The Work with DG File Entries display appears.
3. Press F19 (Load). The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries.
4. At the Configuration source prompt, specify *NONE and press Enter.
5. Identify the location of the files to be used for loading. For common configurations, you can accomplish this by specifying a library name at the System 1 library prompt and accepting the default values for the System 2 library, Load from system, and File prompts. If you are using system 2 as the data source for replication, or if you want the library name to be different on each system, then you need to modify these values to appropriately reflect your data group defaults. At the Load from system prompt, the value that corresponds to the source system of the data group you are loading should be used.
6. To optionally specify file entry options, do the following:
   a. Press F10 (Additional parameters).
   b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
7. If necessary, specify the values you want for the following: the Update option prompt and the Add entry for each member prompt.
8. Press F19 (Load). The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want, or press F21 (Select all).
10. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries.

If necessary, you can use "Changing a data group file entry" on page 253 to customize values for any of the data group file entries. Configuration changes resulting from loading file entries are not effective until the data group is restarted.

Loading file entries from a journal definition

Example: The data group file entries are created by loading from the journal associated with system 1 of the data group. The journal definition 1 specified in the data group definition identifies the journal. This example assumes the configuration is set up so that system 1 in the data group definition is the source for replication.

LODDGFE DGDFN(DGDFN1) CFGSRC(*JRNDFN) LODSYS(*SYS1)

Since the FEOPT parameter was not specified, the resulting data group file entries are created with a value of *DFT for all of the file entry options. Because there is no MIMIX configuration source specified, the value *DFT results in the file entry options specified in the data group definition being used.

Procedure: Use this procedure to create data group file entries from the journal associated with a journal definition specified for the data group. The journal definition associated with the specified system is used for loading. Each generated file entry includes all members of the file. The value of the Default FE options source prompt is ignored when loading from a journal definition.

Note: The data group must be ended before using this procedure.

From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter. The Work with DG File Entries display appears.
3. Press F19 (Load). The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries.
4. At the Configuration source prompt, specify *JRNDFN and press Enter.
5. At the Load from system prompt, ensure that the value specified represents the appropriate system. (This value should match the value specified for Data source in the data group definition.)
6. To optionally specify file entry options, do the following:
   a. Press F10 (Additional parameters).
   b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
7. If necessary, specify the value you want for the Update option prompt.
8. Press F19 (Load). The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want, or press F21 (Select all).
10. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries.

If necessary, you can use "Changing a data group file entry" on page 253 to customize values for any of the data group file entries. Configuration changes resulting from loading file entries are not effective until the data group is restarted.
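If system 2 were instead the source for replication, the same journal-definition load could be requested with LODSYS(*SYS2). This is an illustrative variant assembled from parameter values shown elsewhere in this topic, not an additional documented example:

```
LODDGFE DGDFN(DGDFN1) CFGSRC(*JRNDFN) LODSYS(*SYS2)
```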
Loading file entries from another data group's file entries

Example 1: The data group file entries are created by loading from file entries for another data group, DGDFN2.

LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) FROMDGDFN(DGDFN2)

Since the FEOPT parameter was not specified, the resulting data group file entries for DGDFN1 are created with a value of *DFT for all of the file entry options. Because the configuration source is another data group, the value *DFT results in file entry options which match those specified in DGDFN2.

Example 2: The data group file entries are created by loading from file entries for another data group, DGDFN2, in another installation, MXTEST.

LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) PRDLIB(MXTEST) FROMDGDFN(DGDFN2)

Since the FEOPT parameter was not specified, the resulting data group file entries for DGDFN1 are created with a value of *DFT for all of the file entry options. Because the configuration source is another data group in another installation, the value *DFT results in file entry options which match those specified in DGDFN2 in installation MXTEST.

Procedure: Use this procedure to create data group file entries from the file entries defined to another data group. Each generated file entry includes all members of the file.

Note: The data group must be ended before using this procedure.

From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter. The Work with DG File Entries display appears.
3. Press F19 (Load). The Load Data Group File Entries (LODDGFE) display appears with the name of the data group for which you are creating file entries.
4. At the Configuration source prompt, specify *DGFE and press Enter.
5. At the Production library prompt, either accept *CURRENT or specify the name of an installation library in which the data group you are copying from is located.
6. At the From data group definition prompts, specify the three-part name of the data group from which you are loading.
7. Specify the source for loading values for default file entry options at the Default FE options source prompt. Each element in the file entry options is loaded from the specified location unless you explicitly specify a different value for an element in Step 9.
8. Press Enter.
9. If necessary, do the following to specify a file entry option value to override those loaded from the configuration source:
   a. Press F10 (Additional parameters).
   b. Specify values as needed for the elements of the File entry options prompts. Any values you specify will be used for all of the file entries created with this procedure.
10. Press Enter. The LODDGFE Entry Selection List display appears with a list of the files identified by the specified configuration source.
11. Either type a 1 (Load) next to the files that you want, or press F21 (Select all).
12. To create the file entries, press Enter. All selected files identified from the configuration source are represented in the resulting file entries.

If necessary, you can use "Changing a data group file entry" on page 253 to customize values for any of the data group file entries. Configuration changes resulting from loading file entries are not effective until the data group is restarted.

Adding a data group file entry

When you add a single data group file entry to a data group definition, the configuration is dynamically updated, and MIMIX automatically starts journaling of the file on the source system if the file exists and is not already journaled. Special entries are inserted into the journal data stream to enable the dynamic update. The added data group file entry is recognized by MIMIX as soon as each active process receives the special entries. For each MIMIX process, there may be a delay before the addition is recognized. This is true especially for very active data groups.

Use this procedure to add a data group file entry to a data group. From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. From the Work with DG File Entries display, type a 1 (Add) next to the blank line at the top of the list and press Enter. The Add Data Group File Entry (ADDDGFE) display appears.
4. At the System 1 File and Library prompts, specify the file that you want to replicate.
5. By default, all members in the file are replicated. If you want to replicate only a specific member, specify its name at the Member prompt.
6. Verify that the values of the remaining prompts on the display are what you want.
   Notes:
   - If the file is currently being journaled and transactions are being applied, do not change the values specified for To system 1 file (TOFILE1) and To member (TOMBR1).
   - If you change the value of the Dynamically update prompt to *NO, you need to end and restart the data group before the addition is recognized.
   - If you change the value of the Start journaling of file prompt to *NO and the file is not already journaled, MIMIX will not be able to replicate changes until you start journaling the file.
   - All replicated members of a file must be in the same database apply session. For data groups configured for multiple apply sessions, specify the apply session on the File entry options prompt. See Step 7.
7. Optionally, you can specify file entry options that will override those defined for the data group. Press F10 (Additional parameters) to see all available prompts, then press Page Down and specify the values you want.
8. Press Enter to create the data group file entry.

Changing a data group file entry

Use this procedure to change an existing data group file entry.

Note: All replicated members of a file must be in the same database apply session. For data groups configured for multiple apply sessions, specify the apply session on the File entry options prompt.

From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data group you want and press Enter.
3. Locate the file entry you want on the Work with DG File Entries display. Type a 2 (Change) next to the entry you want and press Enter. The Change Data Group File Entry (CHGDGFE) display appears.
4. You can change any of the values shown on the display. Press F10 (Additional parameters) to see all available prompts.
5. To accept your changes, press Enter.

The replication processes do not recognize the change until the data group has been ended and restarted.
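From a command line, the add described above corresponds to the ADDDGFE command named in the display title. The following is only a hypothetical sketch: the FILE1 and MBR keywords are assumed names for the System 1 File/Library and Member prompts (only the TOFILE1 and TOMBR1 keywords are confirmed by the text above), so verify the actual keywords by prompting the command with F4.

```
ADDDGFE DGDFN(DGDFN1) FILE1(TESTLIB/MYFILE) MBR(*ALL)  /* keyword names assumed for illustration */
```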
Creating data group IFS entries

Data group IFS entries identify IFS objects for replication. The identified objects are replicated through the system journal unless the data group IFS entries are explicitly configured to allow the objects to be replicated through the user journal. Topic "Identifying IFS objects for replication" on page 106 provides detailed concepts and identifies requirements for configuration variations for IFS objects, as well as examples of the effect that multiple data group IFS entries have on object auditing values.

Adding or changing a data group IFS entry

Note: If you are converting a data group to use user journal replication for IFS objects, use this procedure when directed by "Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling" on page 136.

From the management system, do the following to add a new data group IFS entry or change an existing IFS entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 22 (IFS entries) next to the data group you want and press Enter. The Work with Data Group IFS Entries display appears.
3. Do one of the following:
   - To add a new entry, type a 1 (Add) next to the blank line at the top of the display and press Enter.
   - To change an existing entry, type a 2 (Change) next to the entry you want and press Enter.
4. The appropriate Data Group IFS Entry display appears. When adding an entry, you must specify a value for the System 1 object prompt. All objects in the specified path within supported file systems are selected.
   Notes:
   - The object name must begin with the '/' character and can be up to 512 characters in total length. Any component of the object name contained between two '/' characters cannot exceed 255 characters in length. The object name can be a simple name, a name that is qualified with the name of the directory in which the object is located, or a generic name that contains one or more characters followed by an asterisk (*), such as /ABC*.
   - When changing an existing IFS entry to enable replication from a user journal (COOPDB(*YES)), make sure that you specify only the IFS objects you want to enable.
5. At the Process type prompt, specify whether the resulting entries should include (*INCLD) or exclude (*EXCLD) the identified objects.
6. Press Page Down to see more prompts.
7. Specify the appropriate value for the Cooperate with database prompt. To ensure that journaled IFS objects can be replicated from the user journal, specify *YES. To replicate from the system journal, specify *NO.
8. If necessary, specify values for the System 2 object and Object auditing value prompts.
9. If necessary, specify a value for the Object retrieval delay prompt.
10. Ensure that the remaining prompts contain the values you want for the data group entries that will be created.
11. Press Enter to create the IFS entry.

For IFS entries configured for user journal replication, return to Step 7 in procedure "Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling" on page 136 to complete additional steps necessary to complete the conversion.

Synchronize the objects identified by data group entries before starting replication processes or running MIMIX audits. Changes become effective after one of the following occurs:
- The data group is ended and restarted
- Nightly maintenance routines end and restart MIMIX jobs
- A MIMIX audit that uses IFS entries to select objects to audit is started
The entries will be available to replication processes after the data group is ended and restarted. The entries will be available to MIMIX audits the next time an audit runs. This includes after the nightly restart of MIMIX jobs.

Loading tracking entries

Tracking entries are associated with the replication of IFS objects, data areas, and data queues with advanced journaling techniques. IFS tracking entries identify existing IFS stream files on the source system that have been identified as eligible for replication with advanced journaling by the collection of data group IFS entries defined to a data group. Similarly, object tracking entries identify existing data areas and data queues on the source system that have been identified as eligible for replication using advanced journaling by the collection of data group object entries defined to a data group.

A tracking entry must exist for each existing IFS object, data area, or data queue identified for replication. When you initially configure a data group, you must load tracking entries and start journaling for the objects which they identify. Similarly, if you add new or change existing data group IFS entries or object entries, tracking entries for any additional IFS objects, data areas, and data queues must be loaded, and journaling must be started on the objects which they identify.

Loading IFS tracking entries

After you have configured the data group IFS entries for advanced journaling, use this procedure to load IFS tracking entries which match existing IFS objects. This procedure uses the Load DG IFS Tracking Entries (LODDGIFSTE) command. Default values for the command will load IFS tracking entries from objects on the system identified as the source for replication, without duplicating existing IFS tracking entries.

Note: The data group must be ended before using this procedure. Configuration changes resulting from loading tracking entries are not effective until the data group is restarted.
From the management system, do the following:
1. Ensure that the data group is ended. If the data group is active, end it using the procedure "Ending a data group in a controlled manner" in the Using MIMIX book.
2. On a command line, type LODDGIFSTE and press F4 (Prompt). The Load DG IFS Tracking Entries (LODDGIFSTE) command appears.
3. At the prompts for Data group definition, specify the three-part name of the data group for which you want to load IFS tracking entries.
4. Verify that the value specified for the Load from system prompt is appropriate for your environment. If necessary, specify a different value.
5. Verify that the value specified for the Update option prompt is appropriate for your environment. If necessary, specify a different value.
6. At the Submit to batch prompt, specify the value you want.
7. Press Enter. If you see additional prompts for Job description and Job name, specify different values if necessary and press Enter.
8. If you specified *NO for batch processing, the request is processed. You should receive message LVI3E2B indicating the number of tracking entries loaded for the data group.

Note: The command used in this procedure does not start journaling on the tracking entries. Start journaling for the tracking entries when indicated by your configuration checklist.

Loading object tracking entries

After you have configured the data group object entries for advanced journaling, use this procedure to load object tracking entries which match existing data areas and data queues. This procedure uses the Load DG Obj Tracking Entries (LODDGOBJTE) command. Default values for the command will load object tracking entries from objects on the system identified as the source for replication, without duplicating existing object tracking entries.

Note: The data group must be ended before using this procedure. Configuration changes resulting from loading tracking entries are not effective until the data group is restarted.

From the management system, do the following:
1. Ensure that the data group is ended. If the data group is active, end it using the procedure "Ending a data group in a controlled manner" in the Using MIMIX book.
2. On a command line, type LODDGOBJTE and press F4 (Prompt). The Load DG Obj Tracking Entries (LODDGOBJTE) command appears.
3. At the prompts for Data group definition, specify the three-part name of the data group for which you want to load object tracking entries.
4. Verify that the value specified for the Load from system prompt is appropriate for your environment. If necessary, specify a different value.
5. Verify that the value specified for the Update option prompt is appropriate for your environment. If necessary, specify a different value.
6. At the Submit to batch prompt, specify the value you want.
7. Press Enter. If you see additional prompts for Job description and Job name, specify different values if necessary and press Enter.
8. If you specified *NO for batch processing, the request is processed. You should receive message LVI3E2B indicating the number of tracking entries loaded for the data group.

Note: The command used in this procedure does not start journaling on the tracking entries. Start journaling for the tracking entries when indicated by your configuration checklist.
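Both loads above can also be sketched as single command requests. The DGDFN keyword and the layout of the three-part data group name shown here are assumptions based on the Data group definition prompts described in the procedures (the name and system values are illustrative); verify the actual keywords by prompting each command with F4.

```
LODDGIFSTE DGDFN(DGDFN1 SYS1 SYS2)  /* keyword and name format assumed */
LODDGOBJTE DGDFN(DGDFN1 SYS1 SYS2)  /* keyword and name format assumed */
```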
Creating data group DLO entries

Data group DLO entries identify document library objects (DLOs) for replication by MIMIX system journal replication processes. For detailed concepts and requirements, see "Identifying DLOs for replication" on page 111.

When you configure MIMIX, you can specify information so that MIMIX will create the data group DLO entries for you, or you can create individual DLO entries. Once you have created the DLO entries, you can tailor them to meet your requirements.

Note: The MIMIXOWN user profile is automatically added to the system directory when MIMIX is installed. This entry is required for DLO replication and should not be removed.

Loading DLO entries from a folder

If you need to create data group DLO entries for a group of documents within a folder, you can create data group DLO entries by loading from a generic entry and selecting from documents in the list. (You can customize individual entries later.) The user profile you use to perform this task must be enrolled in the system distribution directory on the management system.

From the management system, do the following to create DLO entries by loading from a list:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 21 (DLO entries) next to the data group you want and press Enter. The Work with DG DLO Entries display appears.
3. Press F19 (Load). The Load DG DLO Entries (LODDGDLOE) display appears.
4. Do the following to specify the selection criteria:
   a. Identify the documents to be considered. Specify values for the System 1 folder and System 1 document prompts.
   b. At the Process type prompt, specify whether resulting data group DLO entries should include or exclude the identified documents.
   c. If necessary, specify values for the Owner, System 2 folder, System 2 object, and Object auditing value prompts.
   d. If necessary, specify a value for the Object retrieval delay prompt.
   e. Press Enter. Additional prompts appear to optionally use batch processing and to load entries without selecting entries from a list. If necessary, specify the values you want.
5. Press Enter. The Load DG DLO Entries display appears with the list of documents that matched your selection criteria.
6. Either type a 1 (Select) next to the documents you want, or press F21 (Select all). Then press Enter.
7. If necessary, you can use "Adding or changing a data group DLO entry" on page 260 to customize values for any of the data group DLO entries.

Synchronize the DLOs identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. The entries will be available to MIMIX audits the next time an audit runs. This includes after the nightly restart of MIMIX jobs.

Adding or changing a data group DLO entry

The data group must be ended and restarted before any changes can become effective. From the management system, do the following to add or change a DLO entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 21 (DLO entries) next to the data group you want and press Enter. The Work with DG DLO Entries display appears.
3. Do one of the following:
   - To add a new entry, type a 1 (Add) next to the blank line at the top of the list and press Enter. The Add Data Group DLO Entry display appears.
   - To change an existing entry, type a 2 (Change) next to the entry you want and press Enter. Then skip to Step 5.
4. If you are adding a new DLO entry, specify values for the System 1 folder and System 1 document prompts. At the Process type prompt, specify whether resulting data group DLO entries should include or exclude the identified documents.
5. If necessary, specify values for the Owner, System 2 folder, System 2 object, and Object auditing value prompts.
6. If necessary, specify a value for the Object retrieval delay prompt.
7. Press Enter.

Synchronize the DLOs identified by data group entries before starting replication processes or running MIMIX audits. The entries will be available to replication processes after the data group is ended and restarted. The entries will be available to MIMIX audits the next time an audit runs. This includes after the nightly restart of MIMIX jobs.
Creating data group data area entries

This procedure creates data group data area entries that identify data areas to be replicated by the data area poller process.

Note: The data area poller method is not the preferred way to replicate data areas. The preferred method of replicating data areas is with user journal replication processes using advanced journaling. The next best method is identifying them with data group object entries for system journal replication processes.

For detailed concepts and requirements for supported configurations, see the following topics:
- "Identifying library-based objects for replication" on page 91
- "Identifying data areas and data queues for replication" on page 103

You can load all data group data area entries from a library, or you can add individual data area entries. The data area entries can be created from libraries on either system. You must define data group data area entries from the management system. If the system manager is configured and running, all created and changed data group data area entries are sent to the network systems automatically. Once the data group data area entries are created, you can tailor them to meet your requirements by adding, changing, or deleting entries.

Loading data area entries for a library

Before any addition or change is recognized, you need to end and restart the data group. From the management system, do the following to load data area entries for use with the data area poller:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 19 (Data area entries) next to the data group you want and press Enter. The Work with DG Data Area Entries display appears.
3. Press F19 (Load). The Load DG Data Area Entries (LODDGDAE) display appears.
4. Specify a name for the System 1 library prompt and verify that the value shown for the System 2 library prompt is what you want. The values of the System 1 library and System 2 library prompts indicate the name of the library on the respective systems.
5. Ensure that the value of the Load from system prompt indicates the system from which you want to load data areas.
6. Verify that the remaining prompts on the display contain the values you want. If necessary, change the values.
7. To create the data group data area entries, press Enter. MIMIX sends a message indicating that a data areas load job has been submitted. If you submitted the job for batch processing, a completion message is sent when the load has finished.

Adding or changing a data group data area entry

Before any addition or change is recognized, you need to end and restart the data group. From the management system, do the following to add a new entry or change an existing data area entry for use with the data area poller:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and press Enter.
2. From the Work with Data Groups display, type a 19 (Data area entries) next to the data group you want and press Enter.
3. From the Work with DG Data Area Entries display, do one of the following:
   - To add a new data area entry, type a 1 (Add) at the blank line at the top of the list and press Enter. The Add Data Group Data Area Entry display appears.
   - To change an existing data area entry, type a 2 (Change) next to the data group data area entry you want and press Enter. The Change Data Group Data Area Entry display appears.
4. Specify the values you want at the prompts for System 1 data area and Library and System 2 data area and Library.
5. Press Enter to create the data area entry or accept the change.
Each generic procedure in this topic indicates the type of data group entry for which it can be used. The data group definition to which you are copying must exist. The Copy display for the entry appears. To copy a data group entry to another data group definition. it is best to use *YES. provide: 5. If you specify Dynamically update (*YES). do the following: 1. The change is recognized as soon as each active process receives the update. you do not need to end the processes for the data group when you use the default. When you specify Dynamically update (*NO). To system 1 object For IFS entries. Any of these options will allow an entry to be removed: Option 17 (File entries) Option 19 (Data area entries) Option 20 (Object entries) Option 21 (DLO entries) Option 22 (IFS entries) 2. receive.Table 30. This forces all currently held entries to be deleted. the remove function does not clean up any records in the error/hold log. Values to specify for each type of data group entry. and apply processes for the associated data group and ended and restarted. The "Work with" display for the entry you selected appears. type the option for the entry you want next to the data group and press Enter. Type a 4 (Remove) next to the entry you want and press Enter. specify *YES. all current entries to be ignored. Note: For all data group entries except file entries. Removing a data group entry Use this procedure from the management system to remove a data group entry from a data group definition. Data group file entries support dynamic removals if you prompt the RMVDGFE command and specify Dynamically update (*YES). 7. To copy the entry. and prevents additional entries from accumulating. If an entry is held when you delete it. If you want to replace an existing entry. If a file is on hold and you want to delete the data group file entry. press Enter. 264 . its information remains in the error/hold log. From the Work with DG Definitions display. 
the change is not recognized until after the send. Additional transactions for the file or member can be accumulating in the error/hold log or will be applied to the file. and apply processes for the associated data group are ended and restarted. end and restart the data group being copied. 6. the change is not recognized until after the send receive. For file entries. You may want to remove an entry when you no longer need to replicate the information that the entry identifies. To remove an entry. The value *NO for the Replace definition prompt prevents you from replacing an existing entry in the definition to which you are copying. If you accept the default of Dynamically update (*NO). Additional options: working with DG entries 3. 4. Page Down to see all of the values. Printing a data group entry Use this procedure to create a spooled file which you can print that identifies a system definition. type the option for the entry you want next to the data group and press Enter. Type a 6 (Print) next to the entry you want and press Enter. From the Work with DG Definitions display. press Enter. Displaying a data group entry Use this procedure to display a data group entry for a data group definition. A spooled file is created with a name of MXDG***E. From the Work with DG Definitions display. Specify the values you want and press Enter. For data group file entries. where *** is the type of entry. The appropriate data group entry display appears. You can print the spooled file according to your standard print procedures. 3. The "Work with" display for the entry you selected appears. To display a data group entry. type the option for the entry you want next to the data group and press Enter. transfer definition. journal definition. 265 . The "Work with" display for the entry you selected appears. A confirmation display appears with a list of entries to be deleted. do the following. 
Any of these options will allow an entry to be printed: Option 17 (File entries) Option 19 (Data area entries) Option 22 (IFS entries) 2. 1. a display with additional prompts appears. do the following: 1. To delete the entries. or a data group definition. 3. Not all types of entries support the print function. Type a 5 (Display) next to the entry you want and press Enter. Any of these options will allow an entry to be displayed: Option 17 (File entries) Option 19 (Data area entries) Option 20 (Object entries) Option 21 (DLO entries) Option 22 (IFS entries) 2. To print a data group entry. see “Interpreting results for configuration data . This topic describes the processing resulting from combinations of values specified for the • • • • • 266 . • • • “Changes to startup programs” on page 278 describes changes that you may need to make to your configuration to support remote journaling. “Checking file entry configuration manually” on page 276 provides a procedure using the CHKDGFE command to check the data group file entries defined to a data group. Always use the configuration checklists to guide you though the steps of standard configuration scenarios.#DGFE audit” on page 546.Additional supporting tasks for configuration CHAPTER 13 Additional supporting tasks for configuration The tasks in this chapter provide supplemental configuration tasks. “Using file identifiers (FIDs) for IFS objects” on page 284 describes the use of FID parameters on commands for IFS tracking entries. When IFS objects are configured for replication through the user journal. • • “Accessing the Configuration Menu” on page 268 describes how to access the menu of configuration options from a 5250 emulator. For additional information. “Starting data groups for the first time” on page 282 describes how to start replication once configuration is complete and the systems are synchronized. 
This topic also describes options for ensuring that systems in a MIMIX configuration have the same password and the implications of these options. Use this only when directed to by a configuration checklist. which calls the CHKDGFE command and can automatically correct detected problems. “Setting data group auditing values manually” on page 270 describes when to manually set the object auditing level for objects defined to MIMIX and provides a procedure for doing so. “Starting the system and journal managers” on page 269 provides procedures for starting these jobs. System and journal manager jobs must be running before replication can be started. Note: The preferred method of checking is to use MIMIX AutoGuard to automatically schedule the #DGFE audit. commands that support IFS tracking entries can specify a unique FID for the object on each system. “Identifying data groups that use an RJ link” on page 283 describes how to determine which data groups use a particular RJ link. “Starting the DDM TCP/IP server” on page 279 describes how to start this server that is required in configurations that use remote journaling. “Checking DDM password validation level in use” on page 280 describes how to check the whether the DDM communications infrastructure used by MIMIX Remote Journal support requires a password. MIMIX jobs restart daily to ensure that the MIMIX environment remains operational. 267 . • “Configuring restart times for MIMIX jobs” on page 285 describes how to change the time at which MIMIX jobs automatically restart.object and FID prompts. do the following: 1.Accessing the Configuration Menu The MIMIX Configuration Menu provides access to the options you need for configuring MIMIX. Access the MIMIX Basic Main Menu. 2. 268 . See “Accessing the MIMIX Main Menu” on page 84. select option 11 (Configuration menu) and press Enter. To access the MIMIX Configuration Menu. From the on the MIMIX Basic Main Menu. Press Enter to complete this request. 
Starting the system and journal managers

The system managers, journal managers, and cluster services must be active to start replication. If the system managers are running, they will automatically send configuration information to the network system as you complete configuration tasks.

This procedure starts all the system managers, journal managers, and cluster services (for a cluster environment) during configuration. To start all of the system managers, journal managers, and, if the system is participating in a cluster, cluster services, do the following:
1. Access the MIMIX Basic Main Menu. See “Accessing the MIMIX Main Menu” on page 84.
2. From the MIMIX Basic Main Menu press the F21 key (Assistance level) to access the MIMIX Intermediate Main Menu.
3. Select option 2 (Work with Systems) and press Enter. The Work with Systems display appears with a list of the system definitions.
4. Type a 9 (Start) next to each of the system definitions you want and press Enter.
5. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:
a. Verify that *ALL appears as the value for the Manager prompt. This will start all managers on all of these systems in the MIMIX environment.
b. Press Enter to complete this request.
6. If you selected more than one system definition in Step 4, the Start MIMIX Managers (STRMMXMGR) display will be shown for each system definition that you selected. Repeat Step 5 for each system definition that you selected.

269

Doing so will ensure that replicated objects will be properly audited and that any transactions for the objects that occur between configuration and starting the data group will be replicated.
Processing options .If you anticipate a delay between configuring data group entries and starting the data group. Specify the name of the data group you want. For more information see “Examples of changing of an IFS object’s auditing value” on page 271. The object auditing level of an existing object is set to the auditing value specified in the data group entry that most specifically matches the object. For example.MIMIX checks for existing objects identified by data group entries for the specified data group. you can use the Set Data Group Auditing (SETDGAUD) command. 270 . To manually set the system auditing level of replicated objects.Setting data group auditing values manually Default behavior for MIMIX is to change the auditing value of IFS. MIMIX will change the auditing value even if it is lower than the existing value. The Set Data Group Auditing (SETDGAUD) appears. • For IFS objects. DLO. or to force a change to a lower configured level. Default behavior is that MIMIX only changes an object’s auditing value if the configured value is higher than the object’s existing value. You can also use the SETDGAUD command to reset the object auditing level for all replicated objects if a user has changed the auditing level of one or more objects to a value other than what is specified in the data group entries. it is particularly important that you understand the ramifications of the value specified for the FORCE parameter. • The default value *NO for the FORCE parameter prevents MIMIX from reducing the auditing level of an object. If you specify *YES for the FORCE parameter. and librarybased objects configured for system journal replication as needed when starting data groups with the Start Data Group (STRDG) command. Procedure -To set the object auditing value for a data group. if the SETDGAUD command processes a data group entry with a configured object auditing value of *CHANGE and finds an object identified by that entry with an existing auditing value of *ALL. 
and data group IFS entries. you should use the SETDGAUD command before synchronizing data between systems. When to set object auditing values manually . The SETDGAUD command can be used for data groups configured for replicating object information (type *OBJ or *ALL). However. The SETDGAUD command allows you to set the object auditing level for all existing objects that are defined to MIMIX by data group object entries. 2. Type the command SETDGAUD and press F4 (Prompt). Examples of changing of an IFS object’s auditing value The following examples show the effect of the value of the FORCE parameter when manually changing the object auditing values of IFS objects configured for system journal replication. The auditing values resulting from the SETDGAUD command can be confusing when your environment has multiple data group IFS entries. 4. Because the change is to a lower auditing level. Table 31. Similarly. 5. At the Object type prompt. each with different auditing levels. Note: This may affect the operation of your replicated applications. If you want to allow MIMIX to force a change to a configured value that is lower than the object’s existing value. The following examples illustrate how these conditions affect the results of setting object auditing for IFS objects. Example 1: This scenario shows a simple implementation where data group IFS entries have been modified to have a configured value of *CHANGE from a previously configured value of *ALL. specify *YES for the Force audit value prompt. the change must be forced with the SETDGAUD command. The entries are listed in the order in which they are processed by the SETDGAUD command. IFS entries are processed using the unicode character set. running the SETDGAUD command with FORCE(*NO) does not change the auditing values for this scenario. all descendents of the IFS object may also have their auditing value changed. and more than one entry references objects sharing common parent directories. 
We recommend that you force auditing value changes only when you have specified *ALLIFS for the Object type. When MIMIX processes a data group IFS entry and changes the auditing level of objects which match the entry. In the case of an IFS entry with a generic name.Setting data group auditing values manually 3. Example 1 configuration of data group IFS entries Specified object /DIR1/* /DIR1/DIR2/* /DIR1/STMF Object auditing value OBJAUD(*CHANGE) OBJAUD(*CHANGE) OBJAUD(*CHANGE) Process type PRCTYPE(*EXCLD) PRCTYPE(*INCLD) PRCTYPE(*INCLD) Order processed 1 2 3 Simply ending and restarting the data group will not cause these configuration changes to be effective. 271 . Data group entries are processed in order from most generic to most specific. if necessary. changed to the new auditing value. Press Enter. The first entry (more generic) found that matches the object is used until a more specific match is found. specify the type of objects for which you want to set auditing values. all of the directories in the object’s directory path are checked and. Table 31 identifies a set of data group IFS entries and their configured auditing values. In this scenario there are multiple configured values. Running the command with FORCE(*YES) does change the existing objects’ values. Example 2: Table 33 identifies a set of data group IFS entries and their configured auditing values. Example 2 configuration of data group IFS entries Specified object /DIR1/* /DIR1/DIR2/* /DIR1/STMF Object auditing value OBJAUD(*CHANGE) OBJAUD(*NONE) OBJAUD(*ALL) Process type PRCTYPE(*INCLD) PRCTYPE(*INCLD) PRCTYPE(*INCLD) Order processed 1 2 3 For this scenario. Table 33. Intermediate audit values which occur during FORCE(*YES) processing for example 1. The entries are listed in the order in which they are processed by the SETDGAUD command. 
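The matching rules above (entries processed from most generic to most specific, with FORCE(*YES) required to lower an object's existing auditing level) can be sketched as a simplified model. This is illustrative Python, not MIMIX code; the level ranking and the wildcard matching shown are assumptions based on the examples in this topic:

```python
# Simplified model of how SETDGAUD chooses an object auditing value.
# Assumptions: audit levels rank *NONE < *CHANGE < *ALL; entries are
# processed from most generic to most specific, so the last (most
# specific) matching entry wins; *EXCLD objects are left unchanged.
from fnmatch import fnmatch

RANK = {"*NONE": 0, "*CHANGE": 1, "*ALL": 2}

def resolve_entry(path, entries):
    """entries: list of (pattern, objaud, prctype), most generic first."""
    match = None
    for pattern, objaud, prctype in entries:
        if fnmatch(path, pattern):
            match = (objaud, prctype)   # later (more specific) match wins
    return match

def set_auditing(path, current, entries, force=False):
    match = resolve_entry(path, entries)
    if match is None or match[1] == "*EXCLD":
        return current                  # excluded: no auditing processing
    configured = match[0]
    if RANK[configured] > RANK[current]:
        return configured               # default: only raise the level
    if force and configured != current:
        return configured               # FORCE(*YES) can also lower it
    return current

# Example 1 from this topic: entries configured *CHANGE, object at *ALL.
entries = [("/DIR1/*", "*CHANGE", "*EXCLD"),
           ("/DIR1/DIR2/*", "*CHANGE", "*INCLD"),
           ("/DIR1/STMF", "*CHANGE", "*INCLD")]
print(set_auditing("/DIR1/STMF", "*ALL", entries, force=False))  # *ALL
print(set_auditing("/DIR1/STMF", "*ALL", entries, force=True))   # *CHANGE
```

As in Table 32, FORCE(*NO) leaves the existing *ALL value in place because the configured *CHANGE is lower, while FORCE(*YES) lowers it to *CHANGE.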
running the SETDGAUD command with FORCE(*NO) does not change the auditing values on any existing IFS objects because the configured values from the data group IFS entries are the same or lower than the existing values. object auditing processing does not apply. The existing value is the same as the configured value of the third entry at the time it is processed. This object’s auditing value is evaluated when the third data group IFS entry is processed but the entry does not cause the value to change. Data group IFS entry #3 in Table 33 272 .Table 32 shows the intermediate and final results as each data group IFS entry is processed by the force request. Because the first data group IFS entry excludes objects from replication. 2. Table 34 shows the intermediate values as each entry is processed by the force request and the final results of the change. Existing value Auditing values while processing SETDGAUD FORCE(*YES) Changed by 1st entry Note 1 Note 1 Note 1 Note 1 Note 1 *CHANGE *CHANGE Changed by 2nd entry *CHANGE Changed by 3rd entry Note 2 *CHANGE Final results of FORCE(*YES) *CHANGE *CHANGE *ALL *CHANGE *CHANGE Existing objects /DIR1 /DIR1/STMF /DIR1/STMF2 /DIR1/DIR2 /DIR1/DIR2/STMF *ALL *ALL *ALL *ALL *ALL Notes: 1. Table 32. In this scenario. Intermediate audit values which occur during FORCE(*YES) processing for example 3. Intermediate audit values which occur during FORCE(*YES) processing for example 2. Existing value Auditing values while processing SETDGAUD FORCE(*YES) Changed by 1st entry *CHANGE *CHANGE *CHANGE *CHANGE *CHANGE *NONE *NONE Changed by 2nd entry *NONE Changed by 3rd entry *ALL *ALL Final results of FORCE(*YES) *ALL *ALL *CHANGE *NONE *NONE Existing objects /DIR1 /DIR1/STMF /DIR1/STMF2 /DIR1/DIR2 /DIR1/DIR2/STMF *ALL *ALL *ALL *ALL *ALL Example 3: This scenario illustrates why you may need to force the configured values to take effect after changing the existing data group IFS entries from *ALL to lower values. 
running the SETDGAUD command with FORCE(*NO) does not change the auditing values on any existing IFS objects because the configured values from the data group IFS entries are lower than the existing values. Table 36. Table 36 shows the intermediate values as each entry is processed by the force request and the final results of the change. Table 35 identifies a set of data group IFS entries and their configured auditing values. Table 35. SETDGAUD FORCE(*YES) must be run to have the configured auditing values take effect. Table 34. The entries are listed in the order in which they are processed by the SETDGAUD command. Example 3: configuration of data group IFS entries Specified object /DIR1/* /DIR1/DIR2/* /DIR1/STMF Object auditing value OBJAUD(*CHANGE) OBJAUD(*NONE) OBJAUD(*NONE) Process type PRCTYPE(*INCLD) PRCTYPE(*INCLD) PRCTYPE(*INCLD) Order processed 1 2 3 For this scenario.Setting data group auditing values manually prevents directory /DIR1 from having an auditing value of *CHANGE or *NONE because it is the last entry processed and it is the most specific entry. Existing value Auditing values while processing SETDGAUD FORCE(*YES) Changed by 1st entry *CHANGE *CHANGE *CHANGE Changed by 2nd entry *NONE *NONE Changed by 3rd entry Final results of FORCE(*YES) *NONE *NONE *CHANGE Existing objects /DIR1 /DIR1/STMF /DIR1/STMF2 *ALL *ALL *ALL 273 . 274 . This scenario is quite possible as a result of a normal STRDG request. which may be undesirable. Existing value Auditing values while processing SETDGAUD FORCE(*YES) Changed by 1st entry *CHANGE *CHANGE Changed by 2nd entry *NONE *NONE Changed by 3rd entry Final results of FORCE(*YES) *NONE *NONE Existing objects /DIR1/DIR2 /DIR1/DIR2/STMF *ALL *ALL Example 4: This example begins with the same set of data group IFS entries used in example 3 (Table 35) and uses the results of the forced change in example 3 as the auditing values for the existing objects in Table 37.Table 36. 
Complex data group IFS entries and multiple configured values cause these potentially undesirable results. the objects’ auditing values will be set to those shown in Table 37 for FORCE(*NO). we recommend that you configure a consistent auditing value of *CHANGE across data group IFS entries which identify objects with common parent directories. Example 4: comparison of object’s actual values Auditing value Existing values /DIR1 /DIR1/STMF /DIR1/STMF2 /DIR1/DIR2 /DIR1/DIR2/STMF *NONE *NONE *CHANGE *NONE *NONE After SETDGAUD FORCE(*NO) *CHANGE *CHANGE *CHANGE *CHANGE *CHANGE After SETDGAUD FORCE(*YES) *NONE *NONE *CHANGE *NONE *NONE Existing objects There is no way to maintain the existing values in Table 37 without ensuring that a forced change occurs every time SETDGAUD is run. Table 37. Any addition or change to the data group IFS entries can potentially cause similar results the next time the data group is started. the next time data groups are started. Note: Any addition or change to the data group IFS entries can cause these results to occur. Intermediate audit values which occur during FORCE(*YES) processing for example 3. To avoid this situation. Table 37 shows how running the SETDGAUD command with FORCE(*NO) causes changes to auditing values. In this example. The value *USRPRF is not in the range of valid values for MIMIX. Example 5: comparison of object’s actual values Auditing value Existing values /DIR1/STMF *USRPRF After SETDGAUD FORCE(*NO) *USRPRF After SETDGAUD FORCE(*YES) *NONE Existing objects 275 . Therefore. Table 38 shows the configured data group IFS entry. Table 39. Running the command with FORCE(*YES) does force a change because the existing value and the configured value are not equal. 
Example 5 configuration of data group IFS entries Specified Object /DIR1/STMF Object auditing value OBJAUD(*NONE) Process type PRCTYPE(*INCLD) Order processed 1 Table 39 compares the results running the SETDGAUD command with FORCE(*NO) and FORCE(*YES).Setting data group auditing values manually Example 5: This scenario illustrates the results of SETDGAUD command when the object’s auditing value is determined by the user profile which accesses the object (value *USRPRF). Running the command with FORCE(*NO) does not change the value. an object with an auditing value of *USRPRF is not considered for change. Table 38. Checking file entry configuration manually The Check DG File Entries (CHKDGFE) command provides a means to detect whether the correct data group file entries exist with respect to the data group object entries configured for a specified data group in your MIMIX configuration. When file entries and object entries are not properly matched, your replication results can be affected. Note: The preferred method of checking is to use MIMIX AutoGuard to automatically schedule the #DGFE audit, which calls the CHKDGFE command and can automatically correct detected problems. For additional information, see “Interpreting results for configuration data - #DGFE audit” on page 546. To check your file entry configuration manually, do the following: 1. On a command line, type CHKDGFE and press Enter. The Check Data Group File Entries (CHKDGFE) command appears. 2. At the Data group definition prompts, select *ALL to check all data groups or specify the three-part name of the data group. 3. At the Options prompt, you can specify that the command be run with special options. The default, *NONE, uses no special options. If you do not want an error to be reported if a file specified in a data group file entry does not exist, specify *NOFILECHK. 4. At the Output prompt, specify where the output from the command should be sent—to print, to an outfile, or to both. See Step 6. 5. 
At the User data prompt, you can assign your own 10-character name to the spooled file or choose not to assign a name to the spooled file. The default, *CMD, uses the CHKDGFE command name to identify the spooled file. 6. At the File to receive output prompts, you can direct the output of the command to the name and library of a specific database file. If the database file does not exist, it will be created in the specified library with the name MXCDGFE. 7. At the Output member options prompts, you can direct the output of the command to the name of a specific database file member. You can also specify how to handle new records if the member already exists. Do the following: a. At the Member to receive output prompt, accept the default *FIRST to direct the output to the first member in the file. If it does not exist, a new member is created with the name of the file specified in Step 6. Otherwise, specify a member name. b. At the Replace or add records prompt, accept the default *REPLACE if you want to clear the existing records in the file member before adding new records. To add new records to the end of existing records in the file member, specify *ADD. 8. At the Submit to batch prompt, do one of the following: • If you do not want to submit the job for batch processing, specify *NO and press Enter to check data group file entries. 276 Checking file entry configuration manually • To submit the job for batch processing, accept *YES. Press Enter and continue with the next step. 9. At the Job description prompts, specify the name and library of the job description used to submit the batch request. Accept MXAUDIT to submit the request using the default job description, MXAUDIT. 10. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 11. To start the data group file entry check, press Enter. 
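The kind of consistency check that CHKDGFE performs can be sketched as follows. This is a hypothetical model, not the command's actual logic; the function name, the error identifiers, and the exact rules are assumptions based on the description above (file entries should match the configured object entries, and *NOFILECHK suppresses the missing-file error):

```python
# Hypothetical sketch of a CHKDGFE-style check: every file covered by a
# data group object entry should have a data group file entry, and every
# file entry should refer to a file that exists. With nofilechk=True
# (modeling *NOFILECHK), a file entry for a nonexistent file is not
# reported as an error.
def check_dg_file_entries(object_entry_files, file_entries,
                          existing_files, nofilechk=False):
    errors = []
    for f in sorted(object_entry_files):
        if f not in file_entries:
            errors.append(("MISSING_FILE_ENTRY", f))
    for f in sorted(file_entries):
        if not nofilechk and f not in existing_files:
            errors.append(("FILE_NOT_FOUND", f))
    return errors

# A file entry exists for LIB1/PAYROLL but the file itself is gone.
errs = check_dg_file_entries({"LIB1/ORDERS", "LIB1/PAYROLL"},
                             {"LIB1/ORDERS", "LIB1/PAYROLL"},
                             {"LIB1/ORDERS"})
print(errs)  # [('FILE_NOT_FOUND', 'LIB1/PAYROLL')]
```

Running the same check with nofilechk=True would return no errors, mirroring the effect of specifying *NOFILECHK at the Options prompt.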
277

Changes to startup programs

If you use startup programs, ensure that you include the following operations when you configure for remote journaling:
• If you use TCP/IP as the communications protocol, you need to start TCP/IP, including the DDM server, before starting replication.
• If you use OptiConnect as the communications protocol, the QSOC subsystem must be active.

278

Starting the DDM TCP/IP server

Use this procedure if you need to start the DDM TCP/IP server in an environment configured for MIMIX RJ support. From the system on which you want to start the TCP server, do the following:
1. Ensure that the DDM TCP/IP attributes allow the DDM server to be automatically started when the TCP/IP server is started (STRTCP). Do the following:
a. Type the command CHGDDMTCPA and press F4 (Prompt).
b. Check the value of the Autostart server prompt. If the value is *YES, it is set appropriately. Otherwise, change the value to *YES and press Enter.
2. To prevent install problems due to locks on the library name, ensure that the MIMIX product library is not in your user library list.
3. To start the DDM server, type the command STRTCPSVR(*DDM) and press Enter.

Verifying that the DDM TCP/IP server is running

Do the following:
1. Enter the command NETSTAT OPTION(*CNN)
2. The Work with TCP/IP Connection Status display appears. Look for these servers in the Local Port column:
• ddm
• ddm-ssl
3. These servers should exist and should have a value of Listen in the State column.

279

Checking DDM password validation level in use

MIMIX Remote Journal support uses the DDM communications infrastructure. This infrastructure can be configured to require a password to be provided when a server connection is made. The MIMIXOWN user profile, which establishes the remote journal connection, ships with a preset password so that it is consistent on all systems.
If you have implemented DDM password validation on any systems where MIMIX will be used, you should verify the DDM level in use. If the MIMIXOWN password is not the same on both systems, you may need to change the MIMIXOWN user profile or the DDM security level to allow MIMIX Remote Journal support to function properly. These changes have security implications of which you should be aware.

To check the DDM password validation level in use, do the following on both systems:
1. From a command line, type CHGDDMTCPA and press F4 (Prompt).
2. Check the value of the Password required field.
• If the value is *NO or *VLDONLY, no further action is required. Press F12 (Cancel).
• If the field contains any other value, you must take further action to enable MIMIX RJ support to function in your environment. Press F12, then continue with the next step.
3. You have two options for changing your environment to enable MIMIX RJ support to function. Each option has security implications. You must decide which option is best for your environment. The options are:
• “Option 1. Enable MIMIXOWN user profile for DDM environment” on page 280. MIMIX must be installed and transfer definitions must exist before you can make the necessary changes. For new installations this should be automatically configured for you.
• “Option 2. Allow user profiles without passwords” on page 281. You can use this option before or after MIMIX is installed. However, this option should be performed before configuring MIMIX RJ support.

Option 1. Enable MIMIXOWN user profile for DDM environment

This option changes the MIMIXOWN user profile to have a password and adds server authentication entries to recognize the MIMIXOWN user profile. Do the following from both systems:
1. Access the Work with Transfer Definitions (WRKTFRDFN) display. Then do the following:
a. Type a 5 (Display) next to each transfer definition that will be used with MIMIX RJ support and press Enter.
b.
Page down to locate the value for Relational database (RDB parameter) and record the value indicated. 280 Checking DDM password validation level in use c. If you selected multiple transfer definitions, press Enter to advance to the next selection and record its RDB value. Ensure that you record the values for all transfer definitions you selected. Note: If the RDB value was generated by MIMIX, it will be in the form of the characters MX followed by the System1 definition, System2 definition, and the name of the transfer definition, with up to 18 characters. 2. On the source system, change the MIMIXOWN user profile to have a password and to prevent signing on with the profile. To do this, enter the following command: CHGUSRPRF USRPRF(MIMIXOWN) PASSWORD(user-defined-password) INLMNU(*SIGNOFF) Note: The password is case sensitive and must be the same on all systems in the MIMIX network. If the password does not match on all systems, some MIMIX functions will fail with security error message LVE0127. 3. Verify that the QRETSVRSEC (Retain server security data) system value is set to 1. The value 1 allows the password you specify in the server authentication entry in Step 4 to take effect. DSPSYSVAL SYSVAL(QRETSVRSEC) If necessary, change the system value. 4. You need a server authentication entry for the MIMIXOWN user profile for each RDB entry you recorded in Step 1. To add a server authentication entry, type the following command, using the password you specified in Step 2 and the RDB value from Step 1. Then press Enter. ADDSVRAUTE USRPRF(MIMIXOWN) SERVER(recorded-RDB-value) PASSWORD(user-defined-password) 5. Repeat Step 2 through Step 4 on the target system. Option 2. Allow user profiles without passwords This option changes DDM TCP attributes to allow user profiles without passwords to function in environments that use DDM password validation. Do the following: 1. From a command line on the source system, type CHGDDMTCPA PWDRQD(*VLDONLY) and press Enter. 2. 
From a command line on the target system, type CHGDDMTCPA PWDRQD(*VLDONLY) and press Enter.

281

Starting data groups for the first time

Use this procedure when a configuration checklist directs you to start a newly configured data group for the first time. You should have identified the starting point in the journals with “Establish a synchronization point” on page 442 when you synchronized the systems.
1. From the Work with Data Groups display, type a 9 (Start DG) next to the data group that you want to start and press Enter.
2. The Start Data Group (STRDG) display appears. Press Enter to access additional prompts. Do the following:
a. Specify the starting point for user journal replication. For the Database journal receiver and Database large sequence number prompts, specify the information you recorded in Step 5 of “Establish a synchronization point” on page 442.
b. Specify the starting point for system journal replication. For the Object journal receiver and Object large sequence number prompts, specify the information you recorded in Step 6 of “Establish a synchronization point” on page 442.
c. Specify *YES for the Clear pending prompt.
3. Press Enter.
4. A confirmation display appears. Press Enter.
5. A second confirmation display appears. Press Enter to start the data group.

282

Identifying data groups that use an RJ link

Use this procedure to determine which data groups use a remote journal link before you end a remote journal link or remove a remote journaling environment.
1. Enter the command WRKRJLNK and press Enter.
2. Make a note of the name indicated in the Source Jrn Def column for the RJ link you want.
3. From the command line, type WRKDGDFN and press Enter.
4. For all data groups listed on the Work with DG Definitions display, check the Journal Definition column for the name of the source journal definition you recorded in Step 2.
• If you do not find the name from Step 2, the RJ link is not used by any data group. The RJ link can be safely ended or can have its remote journaling environment removed without affecting existing data groups.
• If you find the name from Step 2 associated with any data groups, those data groups may be adversely affected if you end the RJ link. A request to remove the remote journaling environment removes configuration elements and system objects that need to be created again before the data group can be used. Continue with the next step.
5. Press F10 (View RJ links). Consider the following and contact your MIMIX administrator before taking action that will end the RJ link or remove the remote journaling environment.
• When *NO appears in the Use RJ Link column, the data group will not be affected by a request to end the RJ link or to end the remote journaling environment.
Note: If you allow applications other than MIMIX to use the RJ link, they will be affected if you end the RJ link or remove the remote journaling environment.
• When *YES appears in the Use RJ Link column, the data group may be affected by a request to end the RJ link. If you use the procedure for ending a remote journal link independently in the Using MIMIX book, ensure that any data groups that use the RJ link are inactive before ending the RJ link.

Using file identifiers (FIDs) for IFS objects

Commands used for user journal replication of IFS objects use file identifiers (FIDs) to uniquely identify the correct IFS tracking entries to process. The System 1 file identifier and System 2 file identifier prompts ensure that IFS tracking entries are accurately identified during processing. These prompts can be used alone or in combination with the System 1 object prompt. These prompts enable the following combinations:
• Processing by object path: A value is specified for the System 1 object prompt and no value is specified for the System 1 file identifier or System 2 file identifier prompts.
When processing by object path, a tracking entry is required for all commands with the exception of the SYNCIFS command. If no tracking entry exists, the command cannot continue processing. If a tracking entry exists, a query is performed using the specified object path name.
• Processing by object path and FIDs: A value is specified for the System 1 object prompt and a value is specified for either or both of the System 1 file identifier or System 2 file identifier prompts. When processing by object path and FIDs, a tracking entry is required for all commands. If no tracking entry exists, the command cannot continue processing. If a tracking entry exists, a query is performed using the specified FID values. If the specified object path name does not match the object path name in the tracking entry, the command cannot continue processing.
• Processing by FIDs: A value is specified for either or both of the System 1 file identifier or System 2 file identifier prompts and, with the exception of the SYNCIFS command, no value is specified for the System 1 object prompt. In the case of SYNCIFS, the default value *ALL is specified for the System 1 object prompt. When processing by FIDs, a tracking entry is required for all commands. If no tracking entry exists, the command cannot continue processing. If a tracking entry exists, a query is performed using the specified FID values.

Configuring restart times for MIMIX jobs

Certain MIMIX jobs are restarted, or recycled, on a regular basis in order to maintain the MIMIX environment. The ability to configure this activity can ease conflicts with your scheduled workload by changing when the MIMIX jobs restart to a more convenient time for your environment. The default operation of MIMIX is to restart MIMIX jobs at midnight (12:00 a.m.).
However, you can change the restart time by setting a different value for the Job restart time parameter (RSTARTTIME) on system definitions and data group definitions. The time is based on a 24 hour clock. The values specified in the system definitions and data group definitions are retrieved at the time the MIMIX jobs are started. Changes to the specified values have no effect on jobs that are currently running. Changes are effective the next time the affected MIMIX jobs are started.
For a data group definition you can also specify either *SYSDFN1 or *SYSDFN2 for the Job restart time (RSTARTTIME) parameter. Respectively, these values use the restart time specified in the system definition identified as System 1 or System 2 for the data group. Both system and data group definition commands support the special value *NONE, which prevents the MIMIX jobs from automatically restarting. Be sure to read “Considerations for using *NONE” on page 287 before using this value.

Configurable job restart time operation

To make effective use of the configurable job restart time, you may need to set the job restart time in as few as one or as many as all of these locations:
• One or more data group definitions
• The system definition for the management system
• The system definitions for one or more network systems
MIMIX system-level jobs affected by the Job restart time value specified in a system definition are: system manager (SYSMGR), system manager receive (SYSMGRRCV), and journal manager (JRNMGR). MIMIX data group-level jobs affected by the Job restart time value specified in a data group definition are: object send (OBJSND), object receive (OBJRCV), database send (DBSND), database receive (DBRCV), database reader (DBRDR), object retrieve (OBJRTV), container send (CNRSND), container receive (CNRRCV), status send (STSSND), status receive (STSRCV), and object apply (OBJAPY).
Also, the role of the system on which you change the restart time affects the results.
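As a sketch, the same change can be entered from a command line with the Change System Definition (CHGSYSDFN) and Change Data Group Definition (CHGDGDFN) commands. The system name CHICAGO and the three-part data group name APP1 NEWYORK CHICAGO are hypothetical examples, not values from your environment:

```cl
/* Assumed sketch: set the Job restart time (RSTARTTIME) parameter
   described above. System and data group names are hypothetical. */

/* Restart MIMIX system-level jobs on CHICAGO at 4 a.m. */
CHGSYSDFN SYSDFN(CHICAGO) RSTARTTIME(040000)

/* Restart data group-level jobs for data group APP1 at 1:35 a.m. */
CHGDGDFN  DGDFN(APP1 NEWYORK CHICAGO) RSTARTTIME(013500)
```

As with the prompted displays, the new value is picked up the next time the affected jobs are started.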
For system definitions, the value you specify for the restart time and the role of the system (management or network) determines which MIMIX system-level jobs will restart and when. For data group definitions, the value you specify for the restart time and the role of the system (source or target) determines which data group-level jobs will restart and when. Time zone differences between systems also influence the results you obtain. MIMIX system-level jobs restart when they detect that the time specified in the system definition has passed.
The system manager jobs are a pair of jobs that run between a network system and the management system. The management and network systems both have journal manager jobs, but the jobs operate independently. The job restart time specified in the management system’s system definition determines when to restart the journal manager on the management system. The job restart time specified in the network system’s system definition determines when to restart the journal manager job on the network system, when to restart the system manager jobs on both systems, and also affects when cleanup jobs on both systems are submitted. Table 40 shows how the role of the system affects the results of the specified job restart time.

Table 40. Effect of the system’s role on changing the job restart time in a system definition. The jobs affected are the system managers, cleanup jobs, journal managers, and collector services.

Management system definition:
• System managers, cleanup jobs, collector services — The specified value is not used to determine the restart time. Restart is determined by the value specified for the network system.
• Journal managers — With a time specified, the job on the management system restarts at the time specified. With *NONE, the job on the management system is not restarted.

Network system definition:
• System managers — With a time specified, jobs on both systems restart when the time on the management system reaches the time specified. With *NONE, jobs are not restarted on either system.
• Cleanup jobs — With a time specified, jobs are submitted on both systems by the system manager jobs after they restart. With *NONE, jobs are submitted on both systems when midnight occurs on the management system.
• Journal managers — With a time specified, the job on the network system restarts at the time specified. With *NONE, the job on the network system is not restarted.

For MIMIX data group-level jobs, a delay of 2 to 35 minutes from the specified time is built into the job restart processing. The actual delay is unique to each job. By distributing the jobs within this range the load on systems and communications is more evenly distributed, reducing bottlenecks caused by many jobs simultaneously attempting to end, start, and establish communications.
MIMIX determines the actual restart time for the object apply (OBJAPY) jobs based on the timestamp of the system on which the jobs run. For all other affected jobs, MIMIX determines the actual start time for object or database jobs based on the timestamp of the system on which the OBJSND or the DBSND job runs. Table 41 shows how these key jobs affect when other data group-level jobs restart.

Table 41. Systems on which data group-level jobs run. In each row, the job that determines the restart time for all jobs in the row is listed first.

Source system jobs | Target system jobs
Object send (OBJSND), Object retrieve (OBJRTV), Container send (CNRSND), Status receive (STSRCV) | Object receive (OBJRCV), Container receive (CNRRCV), Status send (STSSND)
Database send (DBSND) (1) | Database receive (DBRCV) (1)
 | Database reader (DBRDR) (1)
 | Object apply (OBJAPY)

1. When MIMIX is configured for remote journaling, the DBSND and DBRCV jobs are replaced by the DBRDR job. The DBRDR job restarts when the specified time occurs on the target system.

For more information about MIMIX jobs see “Replication job and supporting job names” on page 47.

Considerations for using *NONE

Attention: The value *NONE for the Job restart time parameter is not recommended.
If you specify *NONE in a system definition or a data group definition, you need to develop and implement alternative procedures to ensure that the affected MIMIX jobs are periodically restarted. Restarting the jobs ensures that long running MIMIX jobs are not ended by the system due to resource constraints and refreshes the job log to avoid overflow and abnormal job termination.
If you specify the value *NONE for the Job restart time in a data group definition, no MIMIX data group-level jobs are automatically restarted.
If you specify the value *NONE for the Job restart time in a system definition, the cleanup jobs started by the system manager will continue to be submitted based on when midnight occurs on the management system. All other affected MIMIX system-level jobs will not be restarted. Table 40 shows the effect of the value *NONE.

Examples: job restart time

“Restart time examples: system definitions” on page 288 and “Restart time examples: system and data group definition combinations” on page 288 illustrate the effect of using the Job restart time (RSTARTTIME) parameter. These examples assume that the system configured as the management system for MIMIX operations is also the target system for replication during normal operation. For each example, consider the effect it would have on nightly backups that complete between midnight and 1 a.m. on the target system.

Restart time examples: system definitions

These examples show the effect of changing the job restart time only in system definitions.
Example 1: MIMIX is running Monday noon when you change the job restart time to 013000 in system definition NEWYORK, which is the management system. The network system’s system definition uses the default value 000000 (midnight). MIMIX remains up the rest of the day. Because the current jobs use values that existed prior to your change, all the MIMIX system-level jobs on NEWYORK automatically restart at midnight.
As a result of your change, the journal manager on NEWYORK restarts at 1:30 a.m. Tuesday and thereafter. The network system’s journal manager restarts when midnight occurs on that system. The system manager jobs on both systems restart and submit the cleanup jobs when the management system reaches midnight.
Example 2: It is Friday evening and all MIMIX processes on the system CHICAGO are ended while you perform planned maintenance. During that time you change the job restart time to 040000 in system definition CHICAGO, which is a network system. You start MIMIX processing again at 11:07 p.m. so your changes are in effect. The MIMIX system-level jobs that restart Saturday and thereafter at 4 a.m. Chicago time are:
• The journal manager job on CHICAGO
• The system manager jobs on the management system and on CHICAGO
• The cleanup jobs, which are submitted on the management system and on CHICAGO
Because the management system’s system definition uses the default value of midnight, the journal manager on the management system restarts when midnight occurs on that system.
Example 3: Friday afternoon you change system definition HONGKONG to have a job restart time value of *NONE. HONGKONG is the management system. LONDON is the associated network system and its system definition uses the default setting 000000 (midnight). You end and restart the MIMIX jobs to make the change effective. The journal manager on HONGKONG is no longer restarted. At midnight (00:00 Saturday and thereafter) HONGKONG time, the system manager jobs on both systems restart and submit cleanup jobs on both systems. In your runbook you document the new procedures to manually restart the journal manager on HONGKONG.
Example 4: Wednesday evening you change the system definitions for LONDON and HONGKONG to both have a job restart time of *NONE. HONGKONG is the management system. You restart the MIMIX jobs to make the change effective.
At midnight HONGKONG time, only the cleanup jobs on both systems are submitted. In your runbook you document the new procedures to manually restart the journal managers and system managers.

Restart time examples: system and data group definition combinations

These examples show the effect of changing the job restart time in various combinations of system definitions and data group definitions.
Example 5: You have a data group that operates between SYSTEMA and SYSTEMB, which are both in the same time zone. Both the system definitions and the data group definition use the default value 000000 (midnight) for the job restart time. For both systems, the MIMIX system-level jobs restart at midnight. The data group jobs on both systems restart between 2 and 35 minutes after midnight.
Example 6: At 10:30 Tuesday morning you change data group definition APP1 to have a job restart time value of 013500. The data group operates between SYSTEMA and SYSTEMB, which are both in the same time zone. Both system definitions use the default restart time of midnight. MIMIX jobs remain up and running. At midnight, the system-level jobs on both systems restart using the values from the preexisting configuration; the data group-level jobs restart on both systems between 0:02 and 0:35 a.m. On Wednesday and thereafter, APP1 data group-level jobs restart between 1:37 and 2:10 a.m. while the MIMIX system-level jobs and jobs for other data groups restart at midnight.
Example 7: You have a data group that operates between SYSTEMA and SYSTEMB which are both in the same time zone and are defined as the values of System 1 and System 2, respectively. The data group definition specifies a job restart time value of *SYSDFN2. The system definition for SYSTEMA specifies the default job restart time of 000000 (midnight). SYSTEMB is the management system and its system definition specifies the value *NONE for the job restart time.
The journal manager on SYSTEMB does not restart and the data group jobs do not restart on either system because of the *NONE value specified for SYSTEMB. The journal manager on SYSTEMA restarts at midnight. System manager jobs on both systems restart and submit cleanup jobs at midnight as a result of the value in the network system and the fact that the systems are in the same time zone.
Example 8A: You have a data group defined between CHICAGO and NEWYORK (System 1 and System 2, respectively) and the data group’s job restart time is set to 030000 (3 a.m.). CHICAGO is the source system as well as a network system; its system definition uses the default job restart time of midnight. NEWYORK is the target system as well as the management system; its system definition uses a job restart time of 020000 (2 a.m.). There is a one hour time difference between the two systems; said another way, NEWYORK is an hour ahead of CHICAGO. Figure 17 shows the effect of the time zone difference on this configuration.
The journal manager on CHICAGO restarts at midnight Chicago time and the journal manager on NEWYORK restarts at 2 a.m. New York time. The system manager jobs on both systems restart when the management system (NEWYORK) reaches the restart time specified for the network system (CHICAGO). The cleanup jobs are submitted by the system manager jobs when they restart. With the exception of the object apply jobs (OBJAPY), the data group jobs restart during the same 2 to 35 minute timeframe based on Chicago time (between 2 and 35 minutes after 3 a.m. in Chicago; after 4 a.m. in New York). Because the OBJAPY jobs are based on the time on the target system, which is an hour ahead of the source-system time used for the other jobs, the OBJAPY jobs restart between 3:02 and 3:35 a.m. New York time.

Figure 17. Results of Example 8A. This is configured as a standard MIMIX environment.

Example 8B: This scenario is the same as example 8A with one exception. In this scenario, the MIMIX environment is configured to use MIMIX Remote Journal support. Because the database send (DBSND) and database receive (DBRCV) jobs are not used in a remote journaling environment, those jobs do not restart. Figure 18 shows that the database reader (DBRDR) job restarts based on the time on the target system.

Figure 18. Results of example 8B. This environment is configured to use MIMIX Remote Journal support.

Configuring the restart time in a system definition

To configure the restart time for MIMIX system-level jobs in an existing environment, do the following:
1. On the Work with System Definitions display, type a 2 (Change) next to the system definition you want and press F4 (Prompt).
2. Press F10 (Additional parameters), then scroll down to the bottom of the display.
3. At the Job restart time prompt, specify the value you want.
Notes:
• The time is based on a 24 hour clock. Valid values range from 000000 to 235959 and must be specified in HHMMSS format. Although seconds are ignored, the complete time format must be specified. The value 000000 is the default and is equivalent to midnight (00:00:00 a.m.).
• The value *NONE is not recommended. If you specify *NONE, system manager and journal manager jobs will not restart; cleanup jobs are submitted on both the network and management systems based on when midnight occurs on the management system. For more information, see “Considerations for using *NONE” on page 287.
• You need to consider the role of the system definition (management or network system) and the effect of any time zone differences between the management system and the network system.
4. To accept the change, press Enter.
The value for the Job restart time is retrieved from the system definition at the time the jobs are started. The change has no effect on jobs that are currently running. The change is effective the next time the jobs are started.

Configuring the restart time in a data group definition

To configure the restart time for MIMIX data group-level jobs in an existing environment, do the following:
1. On the Work with Data Group Definitions display, type a 2 (Change) next to the data group definition you want and press F4 (Prompt).
2. Press F10 (Additional parameters), then scroll down to the bottom of the display.
3. At the Job restart time prompt, specify the value you want.
Notes:
• The time is based on a 24 hour clock. Valid values range from 000000 to 235959 and must be specified in HHMMSS format. Although seconds are ignored, the complete time format must be specified. The value 000000 is the default and is equivalent to midnight (00:00:00 a.m.).
• The value *NONE is not recommended. For more information, see “Considerations for using *NONE” on page 287.
• You need to consider the effect of any time zone differences between the systems defined to the data group.
4. To accept the change, press Enter.
The value for the Job restart time is retrieved at the time the jobs are started. Changes have no effect on jobs that are currently running. The change is effective the next time the jobs are started.

CHAPTER 14 Starting, ending, and verifying journaling

This chapter describes procedures for starting and ending journaling. It also describes when journaling is started implicitly, as well as the authority requirements necessary for user profiles that create the objects to be journaled when they are created. Normally, journaling is started during configuration. However, there are times when you may need to start or end journaling on items identified to a data group.
The topics in this chapter include:
• “What objects need to be journaled” on page 294 describes, for supported configuration scenarios, what types of objects must have journaling started before replication can occur.
• “MIMIX commands for starting journaling” on page 296 identifies the MIMIX commands available for starting journaling and describes the checking performed by the commands.
• “Journaling for physical files” on page 297 includes procedures for displaying journaling status, starting journaling, ending journaling, and verifying journaling for physical files identified by data group file entries.
• “Journaling for IFS objects” on page 300 includes procedures for displaying journaling status, starting journaling, ending journaling, and verifying journaling for IFS objects replicated cooperatively (advanced journaling). IFS tracking entries are used in these procedures.
• “Journaling for data areas and data queues” on page 303 includes procedures for displaying journaling status, starting journaling, ending journaling, and verifying journaling for data area and data queue objects replicated cooperatively (advanced journaling). Object tracking entries are used in these procedures.

What objects need to be journaled

A data group can be configured in a variety of ways that involve a user journal in the replication of files, data areas, data queues and IFS objects. Journaling must be started for any object to be replicated through a user journal or to be replicated by cooperative processing between a user journal and the system journal. Starting journaling ensures that changes to the objects are recorded in the user journal, and are therefore available for MIMIX to replicate.
Requirements for user journal replication - User journal replication processes require that journaling be started for the objects identified by data group file entries, IFS tracking entries, and object tracking entries. Journaling must be active on all files, IFS objects, data areas and data queues that you want to replicate through a user journal. Both MIMIX Dynamic Apply and legacy cooperative processing use data group file entries and therefore require journaling to be started. Configurations that include advanced journaling for replication of data areas, data queues, or IFS objects also require that journaling be started on the associated object tracking entries and IFS tracking entries.
Typically, journaling is started during configuration. During initial configuration, the configuration checklists direct you when to start journaling for objects identified by data group file entries, IFS tracking entries, and object tracking entries. The MIMIX commands STRJRNFE, STRJRNIFSE, and STRJRNOBJE simplify the process of starting journaling. Although MIMIX commands for starting journaling are preferred, you can also use IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start journaling if you have the appropriate authority for starting journaling. For more information about these commands, see “MIMIX commands for starting journaling” on page 296.
Requirements for system journal replication - System journal replication processes use a special journal, the security audit (QAUDJRN) journal. Events are logged in this journal to create a security audit trail. Because security auditing logs the object changes in the system journal, no special action is needed. Object auditing is automatically set for all objects defined to a data group when the data group is first started, or any time a change is made to the object entries. When data group object entries, IFS entries, and DLO entries are configured, each entry specifies an object auditing value that determines the type of activity on the objects to be logged in the journal.
Requirements for implicit starting of journaling - Journaling can be automatically started for newly created database files, data areas, data queues, or IFS objects when certain requirements are met. The user ID creating the new objects must have the required authority to start journaling and the following requirements must be met:
• IFS objects - A new IFS object is automatically journaled if the directory in which it is created is journaled as a result of a request that permitted journaling inheritance for new objects. If MIMIX started journaling on the parent directory, inheritance is permitted. If you manually start journaling on the parent directory using the IBM command STRJRN, specify INHERIT(*YES). This will allow IFS objects created within the journaled directory to inherit the journal options and journal state of the parent directory.
• New *FILE, *DTAARA, *DTAQ objects - The operating system will automatically journal a new object if it is created in a library that contains a QDFTJRN data area and the data area has enabled automatic journaling for the object type. The default value (*DFT) for the Journal at creation (JRNATCRT) parameter in the data group definition enables MIMIX to create the QDFTJRN data area in a library and enable the data area for automatic journaling for an object type. When the QDFTJRN data area in a library is enabled for an object type, all new objects of that type are journaled, not just those which are eligible for replication.
MIMIX evaluates all data group object entries for each object type. (Entries for *FILE objects are only evaluated when the data group specifies COOPJRN(*USRJRN).) Entries properly configured to allow cooperative processing of the object type determine whether MIMIX will create the QDFTJRN data area. MIMIX uses the data group entry with the most specific match to the object type and library that also specifies *ALL for its System 1 object (OBJ1) and Attribute (OBJATR). For example, if MIMIX finds only the following data group object entries for library MYLIB, it would use the first entry when determining whether to create the QDFTJRN data area because it is the most specific entry that also meets the OBJ1(*ALL) and OBJATR(*ALL) requirements. The second entry is not considered in the determination because its OBJ1 and OBJATR values do not meet these requirements.
LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*FILE) OBJATR(*ALL) COOPDB(*YES) PRCTYPE(*INCLD)
LIB1(MYLIB) OBJ1(MYAPP) OBJTYPE(*FILE) OBJATR(DSPF) COOPDB(*YES) PRCTYPE(*INCLD)
Note: MIMIX prevents the QDFTJRN data area from being created in the following libraries: QSYS*, QRCL*, QRCY*, QGPL, QRPL*, QUSR*, QSPL*, QRECOVERY, QTEMP and SYSIB*.
• Database files created by SQL statements - A new file created by a CREATE TABLE statement is automatically journaled if the library in which it is created contains a journal named QSQJRN.
If you create database files, data areas, or data queues for which you expect automatic journaling at creation, the user ID creating these objects must have the required authority to start journaling.

Authority requirements for starting journaling

Normal MIMIX processes run under the MIMIXOWN user profile, which ships with *ALLOBJ special authority. Therefore, it is not necessary for other users to account for journaling authority requirements when using MIMIX commands (STRJRNFE, STRJRNIFSE, STRJRNOBJE) to start journaling. When the MIMIX journal managers are started, when the data group is started, or when the Build Journaling Environment (BLDJRNENV) command is used, MIMIX checks the public authority (*PUBLIC) for the journal. If necessary, MIMIX changes public authority so the user ID in use has the appropriate authority to start journaling.
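When the public-authority option described below is the route you choose, the required authorities can be granted with the IBM GRTOBJAUT command. The journal name MYLIB/MYJRN here is a hypothetical placeholder:

```cl
/* Hypothetical example: grant *PUBLIC the object authorities that
   allow journaling to be started on objects journaled to MYJRN.
   Replace MYLIB/MYJRN with the journal named in your journal
   definition. */
GRTOBJAUT OBJ(MYLIB/MYJRN) OBJTYPE(*JRN) USER(*PUBLIC)
          AUT(*OBJALTER *OBJMGT *OBJOPR)
```

Granting these three authorities to *PUBLIC is broader than granting *ALL authority to a single profile, so weigh it against your site's security policy.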
MIMIX commands for starting journaling Before you use any of the MIMIX commands for starting journaling. The MIMIX commands for starting journaling are: • • Start Journal Entry (STRJRNFE) .This command starts journaling of IFS objects configured for advanced journaling. one of the following authority requirements must be satisfied: • • • The user profile of the user attempting to start journaling for an object must have *ALLOBJ special authority. Public authority (*PUBLIC) must have *OBJALTER. *OBJMGT. For journaling to be successfully started on an object. and *OBJOPR object authorities for the journal to which the object is to be journaled.This command starts journaling for files identified by data group file entries. If the file or object is not journaled to the correct journal or the attempt to start journaling fails. the user ID that performs the start journaling request must have the appropriate authority requirements. • If you attempt to start journaling for a data group file entry. IFS tracking entries. or data queue is journaled to the journal associated with the data group. an error occurs and the journaling status is changed to *NO. STRJRNOBJ) to start journaling. the journaling status of the data group file entry. the data group file entries. IFS object. Data group IFS entries must be configured and IFS tracking entries be loaded (LODDGIFSTE command) before running the STRJRNIFSE command to start journaling. IFS tracking or object tracking entry is changed to *YES. STRJRN.• If you use the IBM commands (STRJRNPF. MIMIX checks that the physical file. data area. The user profile of the user attempting to start journaling for an object must have explicit *ALL object authority for the journal to which the object is to be journaled. or object tracking entry and the files or objects associated with the entry are already journaled. 296 . 
Data group object entries must be configured and object tracking entries be loaded (LODDGOBJTE command) before running the STRJRNOBJE command to start journaling.

Journaling for physical files

Data group file entries identify physical files to be replicated. When data group file entries are added to a configuration, the physical files which they identify may not be journaled, and the file entries may have an initial status of *ACTIVE. In order for replication to occur, journaling must be started for the files on the source system. This topic includes procedures to display journaling status, and to start, end, or verify journaling for physical files.

Displaying journaling status for physical files

Use this procedure to display journaling status for physical files identified by data group file entries. Do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the Work with Data Groups display.
2. On the Work with Data Groups display, type 17 (File entries) next to the data group you want and press Enter.
3. The Work with DG File Entries display appears. The initial view shows the current and requested status of the data group file entry. Press F10 (Journaled view). At the right side of the display, the Journaled System 1 and System 2 columns indicate whether the physical file associated with the file entry is journaled on each system.
Note: Logical files will have a status of *NA. Data group file entries exist for logical files only in data groups configured for MIMIX Dynamic Apply.

Starting journaling for physical files

Use this procedure to start journaling for physical files identified by data group file entries. In order for replication to occur, journaling must be started for the file on the source system. This procedure invokes the Start Journal Entry (STRJRNFE) command.
The command can also be entered from a command line.Journaling for physical files Journaling for physical files Data group file entries identify physical files to be replicated. 2. 3. Access the journaled view of the Work with DG File Entries display as described in “Displaying journaling status for physical files” on page 297. and to start. Displaying journaling status for physical files Use this procedure to display journaling status for physical files identified by data group file entries. To modify command defaults. Press F10 (Journaled view). This topic includes procedures to display journaling status. However. the physical files which they identify may not be journaled. To start journaling for the physical file associated with the selected data group. 4. 6. For example. If you want to end journaling outside of MIMIX. *SRC. press Enter. use the ENDJRNPF command. specify *YES for the Submit to batch prompt.4. to prepare for upgrading MIMIX software. If you want to end journaling for all files in the library. or *TGT is specified. To modify additional prompts for the command. • • To end journaling using command defaults. To end journaling. any changes to that file are not captured and are not replicated. This procedure invokes the End Journaling File Entry (ENDJRNFE) command. If you want to use batch processing. press F4 (Prompt) and continue with the next step. When *DGDFN. 5. press Enter. The command can also be entered from a command line. specify *ALL at the System 1 file prompt. 5. Access the journaled view of the Work with DG File Entries display as described in “Displaying journaling status for physical files” on page 297. or to correct an error. press Enter. MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required. Press F4 to see a list of valid values. Journaling is ended. Specify the value you want for the Start journaling on system prompt. 
Ending journaling for physical files Use this procedure to end journaling for a physical file associated with a data group file entry. Once journaling for a file is ended. If you want to use batch processing. Press F4 to see a list of valid values. MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required. type a 10 (End journaling) next to the file entry you want and do one of the following: Note: MIMIX cannot end journaling on a file that is journaled to the wrong journal. 3. You may need to end journaling if a file no longer needs to be replicated. From the Work with DG File Entries display. specify *YES for the Submit to batch prompt. 298 . The End Journal File Entry (ENDJRNFE) display appears. 2. do the following: 1. The system returns a message to confirm the operation was successful. a file that does not match the journal definition for that data group. When *DGDFN. 6. To end journaling. Specify the value you want for the End journaling on system prompt. or *TGT is specified. *SRC. The Verify Journaling File Entry (VFYJRNFE) display appears. Access the journaled view of the Work with DG File Entries display as described in “Displaying journaling status for physical files” on page 297. do the following: 1. The Data group definition prompts and the System 1 file prompts identify your selection. 4. press Enter. Specify the value you want for the Verify journaling on system prompt. When these conditions are met. 2. 299 .Journaling for physical files Verifying journaling for physical files Use this procedure to verify if a physical file defined by a data group file entry is journaled correctly. This procedure invokes the Verify Journaling File Entry (VFYJRNFE) command to determine whether the file is journaled and whether it is journaled to the journal defined in the journal definition. To modify additional prompts for the command. When *DGDFN is specified. 
To verify journaling for a physical file. Accept these values or specify the values you want. type a 11 (Verify journaling) next to the file entry you want and do one of the following: • • To verify journaling using command defaults. The command can also be entered from a command line. specify *YES for the Submit to batch prompt 6. Press Enter. MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) when determining where to verify journaling. If you want to use batch processing. 3. press F4 (Prompt) and continue with the next step. 5. the journal status on the Work with DG File Entries display is set to *YES. From the Work with DG File Entries display. the Journaled System 1 and System 2 columns indicate whether the IFS object identified by the tracking is journaled on each system. type 1 and press Enter to access the Work with Data Groups display. You should be aware of the information in “Long IFS path names” on page 107 Displaying journaling status for IFS objects Use this procedure to display journaling status for IFS objects identified by IFS tracking entries. press Enter. Do the following: 1. Access the journaled view of the Work with DG IFS Trk. The command can also be entered from a command line. type 50 (IFS trk entries) next to the data group you want and press Enter. or verify journaling for IFS objects identified for replication through the user journal. To modify the command defaults. 2. The initial view shows the object type and status at the right of the display. The Work with DG IFS Trk. 4. and to start. Starting journaling for IFS objects Use this procedure to start journaling for IFS objects identified by IFS tracking entries. 3. However. Then do one of the following: • • To start journaling using the command defaults. journaling must be started on the source system for the IFS objects identified by IFS tracking entries. type a 9 (Start journaling) next to the IFS tracking entries you want. 2. 3. 
5. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts. When the command is invoked from a command line, you can change the values specified for the IFS objects prompts. Also, you can specify as many as 300 object selectors by using the + for more values prompt.
6. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. The FID values can be used alone or in combination with the IFS object path name. You cannot change the values shown. When the command is invoked from a command line, use F10 to see the FID prompts. Then you can optionally specify the unique FID for the IFS object on either system. See “Using file identifiers (FIDs) for IFS objects” on page 284.
7. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required.
8. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
9. To start journaling on the IFS objects specified, press Enter.

Ending journaling for IFS objects

Use this procedure to end journaling for IFS objects identified by IFS tracking entries. This procedure invokes the End Journaling IFS Entries (ENDJRNIFSE) command. The command can also be entered from a command line.

To end journaling for IFS objects, do the following:
1. Access the journaled view of the Work with DG IFS Trk. Entries display as described in “Displaying journaling status for IFS objects” on page 300.
2. From the Work with DG IFS Trk. Entries display, type a 10 (End journaling) next to the IFS tracking entries you want. Then do one of the following:
• To end journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next step.
3. The End Journaling IFS Entries (ENDJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts. When the command is invoked from a command line, you can change the values specified for the IFS objects prompts.
4. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values shown.
5. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required.
6. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
7. To end journaling on the IFS objects specified, press Enter.

Verifying journaling for IFS objects

Use this procedure to verify if an IFS object identified by an IFS tracking entry is journaled correctly. This procedure invokes the Verify Journaling IFS Entries (VFYJRNIFSE) command to determine whether the IFS object is journaled, whether it is journaled to the journal defined in the data group definition, and whether it is journaled with the attributes defined in the data group definition. The command can also be entered from a command line.

To verify journaling for IFS objects, do the following:
1. Access the journaled view of the Work with DG IFS Trk. Entries display as described in “Displaying journaling status for IFS objects” on page 300.
2. From the Work with DG IFS Trk. Entries display, type a 11 (Verify journaling) next to the IFS tracking entries you want. Then do one of the following:
• To verify journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next step.
3. The Verify Journaling IFS Entries (VFYJRNIFSE) display appears. The Data group definition and IFS objects prompts identify the IFS object associated with the tracking entry you selected. You cannot change the values shown for the IFS objects prompts.
4. The System 1 file identifier and System 2 file identifier prompts identify the file identifier (FID) of the IFS object on each system. You cannot change the values shown.
5. Specify the value you want for the Verify journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and verifies journaling on the appropriate systems as required.
6. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
7. To verify journaling on the IFS objects specified, press Enter.

Journaling for data areas and data queues

Object tracking entries are loaded for a data group after the data group object entries have been configured for replication through the user journal (advanced journaling). However, loading object tracking entries does not automatically start journaling on the objects they identify. In order for replication to occur, journaling must be started on the source system for the objects identified by object tracking entries.

This topic includes procedures to display journaling status, and to start, end, or verify journaling for data areas and data queues identified for replication through the user journal.

Displaying journaling status for data areas and data queues

To check journaling status for data areas and data queues identified by object tracking entries, do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the Work with Data Groups display.
2. On the Work with Data Groups display, type 52 (Obj trk entries) next to the data group you want and press Enter. The Work with DG Obj. Trk. Entries display appears. The initial view shows the object type and status at the right of the display.
3. Press F10 (Journaled view). At the right side of the display, the Journaled System 1 and System 2 columns indicate whether the object identified by the tracking entry is journaled on each system.

Starting journaling for data areas and data queues

Use this procedure to start journaling for data areas and data queues identified by object tracking entries. This procedure invokes the Start Journaling Obj Entries (STRJRNOBJE) command. The command can also be entered from a command line.

To start journaling for data areas and data queues, do the following:
1. If you have not already done so, load the object tracking entries for the data group. Use the procedure in “Loading object tracking entries” on page 258.
2. Access the journaled view of the Work with DG Obj. Trk. Entries display as described in “Displaying journaling status for data areas and data queues” on page 303.
3. From the Work with DG Obj. Trk. Entries display, type a 9 (Start journaling) next to the object tracking entries you want. Then do one of the following:
• To start journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next step.
4. The Start Journaling Obj Entries (STRJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.
5. Specify the value you want for the Start journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and starts or prevents journaling from starting as required.
6. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
7. To start journaling on the objects specified, press Enter.

Ending journaling for data areas and data queues

Use this procedure to end journaling for data areas and data queues identified by object tracking entries. This procedure invokes the End Journaling Obj Entries (ENDJRNOBJE) command. The command can also be entered from a command line.

To end journaling for data areas and data queues, do the following:
1. Access the journaled view of the Work with DG Obj. Trk. Entries display as described in “Displaying journaling status for data areas and data queues” on page 303.
2. From the Work with DG Obj. Trk. Entries display, type a 10 (End journaling) next to the object tracking entries you want. Then do one of the following:
• To end journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next step.
3. The End Journaling Obj Entries (ENDJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.
4. Specify the value you want for the End journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and ends or prevents journaling from ending as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
6. To end journaling on the objects specified, press Enter.

Verifying journaling for data areas and data queues

Use this procedure to verify if an object identified by an object tracking entry is journaled correctly. This procedure invokes the Verify Journaling Obj Entries (VFYJRNOBJE) command to determine whether the object is journaled, whether it is journaled to the journal defined in the data group definition, and whether it is journaled with the attributes defined in the data group definition. The command can also be entered from a command line.

To verify journaling for objects, do the following:
1. Access the journaled view of the Work with DG Obj. Trk. Entries display as described in “Displaying journaling status for data areas and data queues” on page 303.
2. From the Work with DG Obj. Trk. Entries display, type a 11 (Verify journaling) next to the object tracking entries you want. Then do one of the following:
• To verify journaling using the command defaults, press Enter.
• To modify the command defaults, press F4 (Prompt) and continue with the next step.
3. The Verify Journaling Obj Entries (VFYJRNOBJE) display appears. The Data group definition and Objects prompts identify the object associated with the tracking entry you selected. Although you can change the values shown for these prompts, it is not recommended unless the command was invoked from a command line.
4. Specify the value you want for the Verify journaling on system prompt. Press F4 to see a list of valid values. When *DGDFN is specified, MIMIX considers whether the data group is configured for journaling on the target system (JRNTGT) and verifies journaling on the appropriate systems as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press Enter. Additional prompts for Job description and Job name appear. Either accept the default values or specify other values.
6. To verify journaling on the objects specified, press Enter.
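Each of the start, end, and verify commands described above can also be run directly from a command line. The following sketch is illustrative only: MYDG is a placeholder data group name, and the DGDFN parameter keyword is an assumption based on the *DGDFN value used in this book. Prompt each command with F4 to see the actual parameters for your installation.

```cl
/* Illustrative sketch: MYDG is a placeholder data group name and the     */
/* DGDFN keyword is an assumption; prompt with F4 to confirm parameters.  */
STRJRNOBJE DGDFN(MYDG)   /* Start journaling for loaded object tracking entries */
VFYJRNOBJE DGDFN(MYDG)   /* Verify the objects are journaled as configured      */
ENDJRNOBJE DGDFN(MYDG)   /* End journaling for the object tracking entries      */
```

Running the commands this way is useful for automating the procedures in CL programs rather than working through the displays.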
CHAPTER 15 Configuring for improved performance

This chapter describes how to modify your configuration to use advanced techniques to improve journal performance and MIMIX performance. Requirements and restrictions are included.

Journal performance: The following topics describe how to improve journal performance:
• “Minimized journal entry data” on page 307 describes benefits of and restrictions for using minimized user journal entries for *FILE and *DTAARA objects. A discussion of large object (LOB) data in minimized entries and configuration information are included.
• “Configuring for high availability journal performance enhancements” on page 309 describes journal caching and journal standby state within MIMIX to support IBM’s High Availability Journal Performance IBM i option 42, Journal Standby feature and Journal caching.

MIMIX performance: The following topics describe how to improve MIMIX performance:
• “Caching extended attributes of *FILE objects” on page 313 describes how to change the maximum size of the cache used to store extended attributes of *FILE objects replicated from the system journal.
• “Increasing data returned in journal entry blocks by delaying RCVJRNE calls” on page 314 describes how you can improve object send performance by changing the size of the block of data from a receive journal entry (RCVJRNE) call and delaying the next call based on a percentage of the requested block size.
• “Configuring high volume objects for better performance” on page 317 describes how to change your configuration to improve system journal performance.
• “Improving performance of the #MBRRCDCNT audit” on page 318 describes how to use the CMPRCDCNT commit threshold policy to limit comparisons and thereby improve performance of this audit in environments which use commitment control.
Minimized journal entry data

MIMIX supports the ability to process minimized journal entries placed in a user journal for object types of file (*FILE) and data area (*DTAARA). The IBM i provides the ability to create journal entries using an internal format that minimizes the data specific to these object types that is stored in the journal entry. When a journal entry for one of these object types is generated, the system compares the size of the minimized format to the standard format and places whichever is smaller in the journal. This support is enabled in the MIMIX create or change journal definition commands and built using the Build Journal Environment (BLDJRNENV) command.

The benefit of using minimized journal entries is that less data is stored in the journal. In a MIMIX replication environment, you also benefit by having less data sent over communications lines and saved in MIMIX log spaces. Factors in your environment, such as the percentage of journal entries that are updates (R-UP), the size of database records, and the number of bytes typically changed in an update, may influence how much benefit you achieve.

For database files, only update journal entries (R-UP and R-UB) and rollback-type update entries (R-BR and R-UR) can be minimized. When *FLDBDY is specified, file data for modified fields is minimized on field boundaries, and the entry-specific data is viewable and may be used for auditing purposes. The minimizing resulting from specifying *FILE does not occur on field boundaries; in that case, the entry-specific data may not be viewable and may not be used for auditing purposes.

When database files have records with static LOB values, minimized journal entries can produce considerable savings. If MINENTDTA(*FILE) or MINENTDTA(*FLDBDY) is in effect and a database record includes LOB fields, LOB data is journaled only when that LOB is changed. Changes to other fields in the record will not cause the LOB data to be journaled unless the LOB is also changed.

Restrictions of minimized journal entry data

The following MIMIX and operating system restrictions apply:
• If you plan to use keyed replication, do not use minimized journal entry data. Minimized journal entries cannot be used when MIMIX support for keyed replication is in use, since the key may not be present in a minimized journal entry.
• Minimized before-images cannot be selected for automatic before-image synchronization checking.
• Configuring for minimized journal entry data may affect your ability to use the Work with Data Group File Entries on Hold (WRKDGFEHLD) command. For example, using option 2 (Change) on WRKDGFEHLD to convert a minimized record update (RUP) to a record put (RPT) will result in failure when applied; converting RUPs to RPTs requires the presence of a full, non-minimized record.

Your environment may impose additional restrictions:
• If you rely on full image captures in the receiver as part of your auditing rules, do not configure for minimized entry data. Even if you do not rely on full image captures for auditing purposes, consider the effect of how data is minimized.

See the IBM book Backup and Recovery for restrictions and usage of journal entries with minimized entry-specific data.

Configuring for minimized journal entry data

By default, MIMIX user journal replication processes use complete journal entry data. To enable MIMIX to use minimized journal entry data for specific object types, do the following:
1. From the Work with Journal Definitions display, use option 2 (Change) to access the journal definition you want.
2. On the following display, press Enter twice to see all prompts for the display. Page down to the bottom of the display.
3. Press F10 (Additional parameters) to access the Minimize entry specific data prompt.
4. Specify the values you want at the Minimize entry specific data prompt and press Enter.
5. In order for the changes to be effective, you must build the journaling environment using the updated journal definition. To do this, type 14 (Build) next to the definition you just modified on the Work with Journal Definitions display and press Enter.
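The same change can be sketched from a command line. In the sketch below, JRNDFN1 and SYS1 are placeholder names, and the JRNDFN parameter keyword is an assumption; prompt CHGJRNDFN with F4 to confirm the actual keywords for your MIMIX level.

```cl
/* Sketch only: JRNDFN1 and SYS1 are placeholder names.                  */
/* MINENTDTA(*FLDBDY) minimizes file data on field boundaries so that    */
/* entry-specific data remains viewable for auditing.                    */
CHGJRNDFN JRNDFN(JRNDFN1 SYS1) MINENTDTA(*FLDBDY)

/* Rebuild the journaling environment so the change takes effect, the    */
/* command-line equivalent of option 14 (Build) on the Work with Journal */
/* Definitions display.                                                  */
BLDJRNENV JRNDFN(JRNDFN1 SYS1)
```

Choosing *FLDBDY rather than *FILE trades some of the space savings for auditability, since *FILE minimization does not occur on field boundaries.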
Configuring for high availability journal performance enhancements

MIMIX supports IBM’s High Availability Journal Performance IBM i option 42, Journal Standby feature and Journal caching. These high availability performance enhancements improve replication performance on the target system and provide significant performance improvement by eliminating the need to start journaling at switch time.

MIMIX support of IBM’s high availability performance enhancements consists of two independent components: journal standby state and journal caching. These components work individually or together, although when used together, each component must be enabled separately. Journal standby state and journal caching can be used in source send configuration environments as well as in environments where remote journaling is enabled.

Journal standby state minimizes replication impact on the target system by providing the benefits of an active journal without writing the journal entries to disk. When journaling is used on the target system, journal standby state minimizes switch times by retaining the journal relationship for replicated objects. Journal caching provides a means by which to cache journal entries and their corresponding database records into main storage and write to disks only as necessary. Journal caching is particularly helpful during batch operations when large numbers of add, update, and delete operations against journaled objects are performed.

Note: For restrictions of MIMIX support of IBM’s high availability performance enhancements, see “Restrictions of high availability journal performance enhancements” on page 311. Also see the topics on journal management and system performance in the IBM eServer iSeries Information Center.

Journal standby state

Journal standby state minimizes replication impact by providing the benefits of an active journal without writing the journal entries to disk. As such, journal standby state is particularly helpful in saving disk space in environments that do not rely on journal entries for other purposes.

Because it is not necessary to start journaling for objects on the target system prior to switching, all that is necessary prior to switching is to change the journal state to active. If you are journaling on apply, journal standby state can provide a performance improvement on the apply session. If you are not using journaling on target and want to have a switchable data group, then using journal standby state may offer a benefit in reduced switch time. You can start or stop journaling while the journal standby state is enabled.

When a journal is in standby state, commitment control cannot be used for files that are journaled to that journal, and most referential constraints cannot be used. When journal standby state is not an option because of these restrictions, journal caching can be used as an alternative. See “Journal caching” on page 310.
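The standby state itself is an attribute of the journal, set through the Journal state (JRNSTATE) parameter on the IBM command Change Journal (CHGJRN). As a sketch, using a hypothetical journal name JRNLIB/TGTJRN:

```cl
/* JRNLIB/TGTJRN is a hypothetical journal name. Standby state requires  */
/* IBM i option 42 (HA Journal Performance) to be installed.             */
CHGJRN JRN(JRNLIB/TGTJRN) JRNSTATE(*STANDBY)  /* Run the target in standby    */

CHGJRN JRN(JRNLIB/TGTJRN) JRNSTATE(*ACTIVE)   /* Prior to switching, all that */
                                              /* is needed is to make the     */
                                              /* journal active again         */
```

In a MIMIX environment the equivalent function is normally driven through the journal definition rather than by running CHGJRN directly, as described later in this section.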
Minimizing potential performance impacts of standby state

It is possible to experience degraded performance of database apply (DBAPY) processing after enabling journal standby state. You can reduce potential impacts by using the Change Recovery for Access Paths (CHGRCYAP) command, which allows you to change the target access path recovery time for the system. Do the following:
1. On a command line, type the following and press Enter: CHGRCYAP
2. At the Include access paths prompt, specify *ELIGIBLE to include only eligible access paths in the recovery time specification.
Note: While this procedure improves performance, it can cause potentially longer initial program loads (IPL). Deciding to use standby state is a trade off between run-time performance and IPL duration.

Journal caching

Journal caching is an attribute of the journal. When journal caching is enabled, the system caches journal entries and their corresponding database records into main storage. This means that neither the journal entries nor their corresponding database records are written to disk until an efficient disk write can be scheduled. This usually occurs when the buffer is full, or at the first commit, close, or file end of data.

Journal caching can be helpful during batch operations when large numbers of add, update, and delete operations against journaled objects are performed. Without journal caching, batch operations must usually wait for each new journal entry to be written to disk. Because most database transactions must no longer wait for a synchronous write of the journal entries to disk, the performance gain can be significant.

The default value for journal caching is *BOTH. It is recommended that you use this default to perform journal caching on both the source and the target systems. For more information about journal caching, see IBM’s Redbooks Technote “Journal Caching: Understanding the Risk of Data Loss”.

MIMIX processing of high availability journal performance enhancements

You can enable both journal standby state and journal caching using a combination of MIMIX and IBM commands. For example, the Journal state (JRNSTATE) parameter, available on the IBM command Change Journal (CHGJRN), offers equivalent and complementary function to the MIMIX parameter Target journal state (TGTSTATE). Note: For purposes of this document, only MIMIX parameters are described in detail.

To enable journal standby state or journal caching in a MIMIX environment, two parameters have been added to the Create Journal Definition (CRTJRNDFN) and Change Journal Definition (CHGJRNDFN) commands: Target journal state (TGTSTATE) and Journal caching (JRNCACHE). See “Creating a journal definition” on page 192 and “Changing a journal definition” on page 194.

The TGTSTATE parameter specifies the requested status of the target journal. Valid values for the TGTSTATE parameter are *ACTIVE and *STANDBY. When *ACTIVE is specified and the data group associated with the journal definition is journaling on the target system (JRNTGT(*YES)), the target journal state is set to active when the data group is started. When *STANDBY is specified, objects are journaled on the target system, but most journal entries are prevented from being deposited into the target journal. For journals in standby mode, commitment control entries are not sent to or deposited in the journal. An additional value, *SAME, which indicates the TGTSTATE value should remain unchanged, is valid for the CHGJRNDFN command.

The JRNCACHE parameter specifies whether the system should cache journal entries in main storage before writing them to disk. Valid values for the JRNCACHE parameter are *TGT, *BOTH, *NONE, or *SRC. The recommended value of *BOTH is the default. Although journal caching can be configured on the target system, source system, or both, it is recommended to be performed on both (*BOTH) the target system and source system. An additional value, *SAME, which indicates the JRNCACHE value should remain unchanged, is valid for the CHGJRNDFN command.

Requirements of high availability journal performance enhancements

Feature 5117, i5/OS Option 42 - HA Journal Performance, is required in order to use MIMIX support of IBM’s high availability performance enhancements. Each system in the replication environment must have this software installed and be up to date with the latest PTFs and service packs applied.

Restrictions of high availability journal performance enhancements

MIMIX support of IBM’s high availability performance enhancements has a unique set of restrictions and high availability considerations. Make sure that you are aware of these restrictions before using journal standby state or journal caching in your MIMIX environment.

When using journal standby state or journal caching, be aware of the following restrictions documented by IBM:
• Do not use these high availability performance enhancements in conjunction with commitment control. Note: MIMIX does not use commitment control on the target system. MIMIX support of IBM’s high availability performance enhancements can be configured on the target system even if commitment control is being used on the source system.
• Do not use these high availability performance enhancements in conjunction with referential constraints, with the exception of referential constraint types of *RESTRICT.
• Do not change journal standby state or journal caching on IBM-supplied journals. These journal names begin with “Q” and reside in libraries whose names also begin with “Q” (not QGPL). Attempting to change these journals results in an error message.

Also be aware of the following additional restrictions:
• Do not place a remote journal in journal standby state. Journal caching is also not allowed on remote journals.
• Do not use MIMIX support of IBM’s high availability performance enhancements in a cascading environment.
Caching extended attributes of *FILE objects

In order to accurately replicate actions against *FILE objects, it is sometimes necessary to retrieve the extended attribute of a *FILE object, such as PF, LF, or DSPF. Whenever large volumes of journal entries for *FILE objects are replicated from the security audit journal (system journal), MIMIX caches this information for a fixed set of *FILE objects to prevent unnecessary retrievals of the extended attribute. The result is a potential reduction of CPU consumption by the object send job and a significant performance improvement.

This function can be tailored to suit your environment. The maximum size of the cache is controlled through the use of a data area in the MIMIX product library. The cache size indicates the number of entries that can be contained in the cache. Valid cache values are numbers 00 through 99. If the data area is not created or does not exist in the MIMIX product library, the size of the cache defaults to 15.

To configure the extended attribute cache, do the following:

1. Create the data area on the systems on which the object send jobs are running. Type the following command: CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR) LEN(2)
2. Specify the cache size (xx). Type the following command: CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE('xx.RCVJRNE_delay_values')

Notes:
• The four RCVJRNE delay values are specified in this string along with the cache size. See topic "Increasing data returned in journal entry blocks by delaying RCVJRNE calls" on page 314 for more information.
• Using 00 for the cache size value disables the extended attribute cache.
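The effect of the extended attribute cache can be modeled in Python. This is purely an illustrative sketch, not MIMIX code: the class name, the fetch function, and the eviction policy are assumptions for the example; the real cache is internal to the object send job. It shows why a bounded cache keyed by object name reduces attribute retrievals, and why a size of 0 (the 00 value above) disables caching.

```python
# Hypothetical model of a bounded extended-attribute cache (not MIMIX code).
from collections import OrderedDict

class ExtAttrCache:
    def __init__(self, size, fetch):
        self.size = size          # maximum number of entries (00-99)
        self.fetch = fetch        # function that actually retrieves the attribute
        self.entries = OrderedDict()
        self.fetch_count = 0      # how many real retrievals were needed

    def lookup(self, obj):
        if obj in self.entries:
            self.entries.move_to_end(obj)        # keep recently used entries
            return self.entries[obj]
        self.fetch_count += 1                    # cache miss: do the retrieval
        attr = self.fetch(obj)
        if self.size > 0:                        # size 0 disables caching
            self.entries[obj] = attr
            if len(self.entries) > self.size:
                self.entries.popitem(last=False) # evict the oldest entry
        return attr

# Ten journal entries for the same *FILE object cause one retrieval, not ten.
cache = ExtAttrCache(15, lambda obj: "PF")
for _ in range(10):
    cache.lookup("MYLIB/MYFILE")
print(cache.fetch_count)  # 1
```

With the cache disabled (size 0), the same ten lookups would each retrieve the attribute, which is the overhead the data area value 00 reintroduces.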
Increasing data returned in journal entry blocks by delaying RCVJRNE calls

Enhancements have been made to MIMIX to increase the performance of the object send job when a small number of journal entries are present during the Receive Journal Entry (RCVJRNE) call. Journal entries are received in configurable-sized blocks that have a default size of 99,999 bytes. When multiple RCVJRNE calls are performed and each block retrieved is less than 99,999 bytes, unnecessary overhead is created. Through additional controls added to the MXOBJSND *DTAARA objects within the MIMIX installation library, you can now specify the size of the block of data received from RCVJRNE and delay the next RCVJRNE call based on a percentage of the requested block size. Doing so increases the probability of receiving a full journal entry block and improves object send performance—reducing the number of RCVJRNE calls while simultaneously increasing the quantity of data returned in each block. This delay, along with the extended file attribute cache capability, also reduces CPU consumption by the object send job. See "Caching extended attributes of *FILE objects" on page 313 for related information.

Understanding the data area format

This enhancement allows you to provide byte values for the block size to receive data from RCVJRNE, as well as specify the percentage of that block size to use for both a small delay block and a medium delay block in the data area. These values are added in segments to the string of characters used by the file attribute cache size. The following defines each segment and includes the number of characters that particular segment can contain:

DTAARA VALUE('cache_size2.small_block_percentage2.small_multiplier2.medium_block_percentage2.medium_multiplier2.block_size4')

The RCVJRNE block size is specified in kilobytes, ranging from 32 Kb to 4000 Kb. If not specified, the default size is 99,999 bytes (100 Kb - 1), with the small block value set to 5,000 bytes and the medium block value set to 20,000 bytes. Each block segment is followed by a multiplier value, which determines how long the previously specified journal entry block is delayed. The duration of the delay is the multiplier value multiplied by the value specified on the Reader wait time (seconds) (RDRWAIT) parameter in the data group definition. The RDRWAIT default value is 1 second.

To illustrate the effect of specific delay and multiplier values, let us assume the following:

DTAARA VALUE('15.01.02.10.30.0200')

In this example, the RCVJRNE block size is 200 Kb. A small block is defined as any journal entry block consisting of 10 percent of the RCVJRNE block size of 200 Kb, or 20,000 bytes. Assuming the RDRWAIT default is in effect, small journal entry blocks will be delayed for 2 seconds before the next RCVJRNE call. Similarly, a medium block is defined as any journal entry block containing between 10 and 30 percent of the RCVJRNE block size, or between 20,001 and 60,000 bytes. Medium blocks are then delayed for 1 second assuming the default RDRWAIT value is used.

Note: Delays are not applied to blocks larger than the specified medium block percentage. In this case, no delays will be applied to blocks larger than 30 percent of the RCVJRNE block size, or 60,000 bytes.

Determining if the data area should be changed

Before changing the data area, to ensure that the object send job is operating efficiently, it is recommended that you contact a Certified MIMIX Consultant for assistance with running object send processing with diagnostic messages enabled. Review the set of LVI0001 messages returned as a result. The following are examples of LVI0001 messages:

LVI0001 OM2120 Using RCVJRNE Block Size (in Kb): 200
LVI0001 OM2120 Block Sizes (in Kb): Small=20, Medium=60
LVI0001 OM2120 Block Counts: Small=129, Medium=461, Large=46, Full=1
LVI0001 OM2120 - Range Counts: 0%=80, 2%=28, 5%=21, 10%=23, 15%=56, 20%=161, 25%=221, 30%=23
LVI0001 OM2120 - Range Counts: 40%=10, 50%=4, 60%=5, 70%=3, 80%=0, 90%=1
LVI0001 OM2120 File Attr Cache: Size= 30, no cache lookup attempts

In the above example, 636 blocks were sent but only one of the sent blocks was full. Making changes to the delay multiplier or altering the small or medium block size specification would probably make sense in this scenario. Note that a block is considered full when the next journal entry in the sequence cannot fit within the size limitations of the block currently being processed. If the resulting messages indicate that you are processing full journal entry blocks, the object send job is already running as efficiently as possible; there is no need to add a delay to the RCVJRNE call. Recommendations for changing the block size values are provided in "Configuring the RCVJRNE call delay and block values" on page 315.

Note: Reviewing these messages can also be helpful once you have changed the default values.

Configuring the RCVJRNE call delay and block values

To configure the delay and block values when retrieving journal entry blocks, do the following:

Note: Prior to configuring the RCVJRNE call delay, carefully read the information provided in "Understanding the data area format" on page 314 and "Determining if the data area should be changed" on page 315.

1. Create the data area on the systems on which the object send jobs are running. Type the following command: CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR) LEN(20)

   Note: Although you will see improvements from the file attribute cache with the default character value (LEN(2)), enhancements are maximized by recreating the MXOBJSND data area as a LEN(20) to use the RCVJRNE call delays.

2. Specify the delay and block values. Type the following command: CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE('cache_size.01.02.10.30.0100')

   Note: For information about the cache size, see "Caching extended attributes of *FILE objects" on page 313.

Configuring high volume objects for better performance

Some objects, such as data areas and data queues, can have significant activity against them and can cause MIMIX to use significant CPU resource. One or several programs can use the QSNDDTAQ and QRCVDTAQ APIs to generate thousands of journal entries for a single *DTAQ. When you configure a data group for system journal replication, system journal replication processes package all of the entries of the *DTAQ and send it to the apply system. MIMIX then individually applies each *DTAQ entry using the QSNDDTAQ API. For each journal entry, MIMIX contains redundancy logic that eliminates multiple journal entries for the same object when the entire object is replicated. If the data group is configured for multiple Object retrieve processing (OBJRTVPRC) jobs, then several object retrieve jobs could be started (up to the maximum configured) to handle the activity against the *DTAQ. To configure high volume objects for better performance, you should:

• Place all *DTAQs in the same object-only data group.
• Limit the maximum number of object retrieve jobs for the data group to one. Defaults can be used for the other object data group jobs.
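The delay rules described above can be sketched as a small Python model. This is an illustrative simplification, not MIMIX code: the function name and parameters are assumptions for the example, and the values come from the worked example (200 Kb block, small blocks up to 10 percent delayed by a multiplier of 2, medium blocks up to 30 percent delayed by a multiplier of 1, RDRWAIT of 1 second).

```python
# Simplified model of the RCVJRNE delay decision (illustrative, not MIMIX code).
def rcvjrne_delay(block_bytes, block_size_kb, small_pct, small_mult,
                  medium_pct, medium_mult, rdrwait=1):
    """Return the delay in seconds applied before the next RCVJRNE call."""
    small_limit = block_size_kb * 1000 * small_pct // 100
    medium_limit = block_size_kb * 1000 * medium_pct // 100
    if block_bytes <= small_limit:
        return small_mult * rdrwait        # small block: longest delay
    if block_bytes <= medium_limit:
        return medium_mult * rdrwait       # medium block: shorter delay
    return 0                               # larger blocks are never delayed

# Values from the example: 200 Kb block, small=10% (x2), medium=30% (x1).
print(rcvjrne_delay(15_000, 200, 10, 2, 30, 1))   # 2 (small block)
print(rcvjrne_delay(40_000, 200, 10, 2, 30, 1))   # 1 (medium block)
print(rcvjrne_delay(120_000, 200, 10, 2, 30, 1))  # 0 (no delay)
```

The design intent is visible in the model: the smaller the block just received, the longer the job waits, so that the next call is more likely to return a full block.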
Improving performance of the #MBRRCDCNT audit

Environments that use commitment control may find that, in some conditions, a request to run the #MBRRCDCNT audit or the Compare Record Count (CMPRCDCNT) command can be extremely long-running. This is possible in environments that use commitment control with long-running commit transactions that include large numbers (tens of thousands) of record operations within one transaction. In such an environment, the compare request can be long-running when the number of members to be compared is very large and there are uncommitted changes present at the time of the request.

The Set MIMIX Policies (SETMMXPCY) command includes the CMPRCDCNT commit threshold policy (CMPRCDCMT parameter) that provides the ability to specify a threshold at which requests to compare record counts will no longer perform the comparison due to commit cycle activity on the source system. The shipped default values for this policy permit record count comparison requests without regard to commit cycle activity on the source system. These policy default values are suitable for environments that do not have the commitment control environment indicated, or that can tolerate a long-running comparison.

If your environment cannot tolerate a long-running request, you can specify a numeric value for the CMPRCDCMT parameter for either the MIMIX installation or for a specific data group. A numeric value for the CMPRCDCMT parameter defines the maximum number of uncommitted record operations that can exist for files waiting to be applied in an apply session at the time a compare record count request is invoked. The number specified must be representative of the number of uncommitted record operations. This can reduce processing time and can improve performance of #MBRRCDCNT and CMPRCDCNT requests.

When a numeric value is specified, MIMIX recognizes whether the number of uncommitted record operations for an apply session exceeds the threshold at the time a compare request is invoked. Each database apply session is evaluated against the threshold independently. If an apply session has not reached the threshold, the comparison is performed. If the threshold is exceeded, MIMIX will not attempt to compare members from that apply session. Instead, the results will display the *CMT value for the difference indicator, indicating that commit cycle activity on the source system prevented active processing from comparing counts of current records and deleted records in the selected member.

When a threshold is specified for the CMPRCDCNT commit threshold policy, this will change the behavior of MIMIX by affecting what is compared. As a result, record count comparisons can have a higher number of file members that are not compared. Also, it is possible for record counts to be compared for files in one apply session but not be compared in another apply session, as illustrated in the following example.

Note: Equal record counts suggest but do not guarantee that files are synchronized. This must be taken into consideration when using the comparison results to gauge whether systems are synchronized.

Example: This example shows the result of setting the policy for a data group to a value of 10,000. Table 42 shows the files replicated by each of the apply sessions used by the data group and the result of comparison. Because of the number of uncommitted record operations present at the time of the request, files processed by apply sessions A and C are not compared.

Table 42. Sample results with a policy threshold value of 10,000

Apply Session  Files  Uncommitted Record       Apply Session Total  Result
                      Operations Per File
A              A01    11,000                   > 10,000             Not compared, *CMT
               A02    0                                             Not compared, *CMT
B              B01    7,000                    < 10,000             Compared
               B02    0                                             Compared
C              C01    5,000                    > 10,000             Not compared, *CMT
               C02    6,000                                         Not compared, *CMT
D              D01    50                       < 10,000             Compared
               D02    500                                           Compared
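The per-session threshold decision behind Table 42 can be sketched as follows. This is an illustrative Python model, not MIMIX code; the function and data structure are assumptions for the example, using the table's values and a threshold of 10,000.

```python
# Illustrative model of the CMPRCDCNT commit threshold decision (not MIMIX code).
def compare_results(apply_sessions, threshold):
    """apply_sessions maps a session name to {file: uncommitted record operations}.
    Every file in a session whose total exceeds the threshold is reported *CMT;
    each apply session is evaluated against the threshold independently."""
    results = {}
    for session, files in apply_sessions.items():
        over = sum(files.values()) > threshold
        for name in files:
            results[name] = "*CMT" if over else "compared"
    return results

sessions = {
    "A": {"A01": 11_000, "A02": 0},
    "B": {"B01": 7_000, "B02": 0},
    "C": {"C01": 5_000, "C02": 6_000},
    "D": {"D01": 50, "D02": 500},
}
res = compare_results(sessions, 10_000)
print(res["A01"], res["B01"], res["C01"], res["D01"])  # *CMT compared *CMT compared
```

Note how C01 is not compared even though its own 5,000 uncommitted operations are below the threshold: the decision is made per apply session, not per file.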
CHAPTER 16  Configuring advanced replication techniques

This chapter describes how to modify your configuration to support advanced replication techniques for user journal (database) and system journal (object) replication.

User journal replication: The following topics describe advanced techniques for user journal replication:

• "Keyed replication" on page 322 describes the requirements and restrictions of replication that is based on key values within the data. This topic also describes how to configure keyed replication at the data group or file entry level as well as how to verify key attributes.
• "Data distribution and data management scenarios" on page 327 defines and identifies configuration requirements for the following techniques: bi-directional data flow, file combining, file sharing, file merging, file routing, broadcasting, and cascading.
• "Trigger support" on page 334 describes how MIMIX handles triggers and how to enable trigger support. Requirements and considerations for replication of triggers, including considerations for synchronizing files with triggers, are included.
• "Constraint support" on page 336 identifies the types of constraints MIMIX supports. This topic also describes delete rules for referential constraints that can cause dependent files to change and MIMIX considerations for replication of constraint-induced modifications.
• "Handling SQL identity columns" on page 338 describes the problem of duplicate identity column values and how the Set Identity Column Attribute (SETIDCOLA) command can be used to support replication of SQL tables with identity columns. Requirements and limitations of the SETIDCOLA command as well as alternative solutions are included.
• "Collision resolution" on page 345 describes available support within MIMIX to automatically resolve detected collisions without user intervention and its requirements. This topic also describes how to define and work with collision resolution classes, and includes several examples.

System journal replication: The following topics describe advanced techniques for system journal replication:

• "Omitting T-ZC content from system journal replication" on page 350 describes considerations and requirements for omitting content of T-ZC journal entries from replicated transactions for logical and physical files.
• "Selecting an object retrieval delay" on page 354 describes how to set an object retrieval delay value so that a MIMIX lock on an object does not interfere with your applications.
• "Configuring to replicate SQL stored procedures and user-defined functions" on page 356 describes the requirements for replicating these constructs and how to configure MIMIX to replicate them.
• "Using Save-While-Active in MIMIX" on page 358 describes how to change the type of save-while-active option to be used when saving objects.

Keyed replication

By default, MIMIX user journal replication processes use positional replication. You can change from positional replication to keyed replication for database files, allowing replication to be based on key values within the data instead of by the position of the data within the file. You can view and change these configuration values for a data group through an interface such as SQL or DFU. Keyed replication support is subject to the requirements and restrictions described. You also need to be aware that data "collisions" can occur when an attempt is made to simultaneously update the same data from two different sources.
Keyed vs positional replication

In data groups that are configured for user journal replication, default values use positional replication. In positional file replication, data on the target system is identified by position, or relative record number (RRN), in the file member. If data exists in a file on the source system, an exact copy must exist in the same position in a file on the target system. When the file on the source system is updated, MIMIX finds the data in the exact location on the target system and updates that data with the changes. Positional file replication provides the best performance and is recommended for most high availability requirements.

User journal replication processes also support the update of files by key. Keyed file replication offers a greater level of flexibility and is best used for more flexible scenarios, such as file sharing or file combining, but you may notice greater CPU usage when MIMIX must search each file for the specified key.

Requirements for keyed replication

Journal images - MIMIX may need to be configured so that both before-images and after-images of the journal transaction are placed in the journal. Default values result in only an after-image of the record. However, some configurations, such as file sharing, require both before-images and after-images. If the unique key fields of the database file are updated by applications, you must use the value *BOTH. The Journal image element of the File and tracking entry options (FEOPT) parameter controls which journal images are placed in the journal. The Journal image value specified in the data group definition is in effect unless a different value is specified for the FEOPT parameter in a file entry or object entry. It is recommended that you use the Journal image value of *BOTH whenever there are file entries with keyed replication to prevent before-images from being filtered out by the database send process.

Unique access path - At least one unique access path must exist for the file being replicated. The access path can be either part of the physical file itself or it can be defined in a logical file dependent on the physical file.

You can use the Verify Key Attributes (VFYKEYATR) command to determine whether a physical file is eligible for keyed replication. See "Verifying key attributes" on page 326.

Restrictions of keyed replication

MIMIX does not support keyed replication in data groups that are configured for MIMIX Dynamic Apply. Data groups configured for MIMIX Dynamic Apply have *USRJRN specified as the value for the Cooperative Journal (COOPJRN) parameter. To replicate files that are updated by key, the data group must be configured so that cooperative processing activity occurs primarily through the system journal.

If you configure a data group for keyed replication, the journal and journal definition cannot be configured to allow object types to support minimized entry specific data. For more information, see "Minimized journal entry data" on page 307.

The Compare File Data (CMPFILDTA) command cannot compare files that are configured for keyed replication. If you run the #FILDTA audit or the CMPFILDTA command against keyed files, the files are excluded from the comparison and a message indicates that files using *KEYED replication were not processed.

When keyed replication is in use, attempting to change from keyed to positional replication can result in a mismatch of the relative record numbers (RRN) between the target system and the source system.

Attention: If you attempt to change the file replication from *KEYED to *POSITION, a warning message will be returned that indicates that the position of the file may not match the position of the file on the backup system.

Implementing keyed replication

You can implement keyed replication for an entire data group or for individual data group file entries. If you configure individual data group file entries for keyed replication, the values you define in the data group file entry override the defaults used by the data group for the associated file.

Changing a data group configuration to use keyed replication

You can define keyed replication for a data group when you are initially configuring MIMIX or you can change the configuration later. When keyed replication is defined for a data group, MIMIX uses keyed replication as the default for all processing of all associated data group file entries.

To use keyed replication for all database replication defined for a data group, the following requirements must be met:

1. The data group must be configured so that cooperative processing occurs primarily through the system journal. For more information, see "Checklist: Converting to legacy cooperative processing" on page 138.
2. The files identified by the data group file entries for the data group must be eligible for keyed replication. See topic "Verifying Key Attributes" in the Using MIMIX book.
3. Verify that the files defined to the data group are journaled correctly. DB journal entry processing must have Before images as *SEND for source send configurations. When using remote journaling, all journal entries are sent. Consult your MIMIX administrator to verify this. Do not continue until this is verified.
4. In the data group definition used for replication you must specify the following:
   • Data group type of *ALL or *DB.
   • *SYSJRN for the Cooperative journal (COOPJRN) parameter.
   • Verify that you have the value you need specified for the Journal image element of the File and tracking ent. options; *BOTH is recommended.
   • File and tracking ent. options must specify *KEYED for the Replication type element.
5. If you have modified file entry options on individual data group file entries, you need to ensure that the values used are compatible with keyed replication.
6. If the files are not currently journaled correctly, you need to end journaling for the file entries defined to the data group. Use topic "Ending Journaling" in the Using MIMIX book. Then start journaling for the file entries using "Starting journaling for physical files" on page 297.
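The contrast between positional and keyed addressing described above can be sketched in Python. This is purely illustrative and hypothetical — MIMIX applies journal entries to physical file members, not Python structures — but it shows why keyed replication tolerates positional differences at the cost of a search, while positional replication requires an exact positional copy.

```python
# Illustrative contrast between positional (RRN) and keyed replication.

def apply_positional(target, rrn, record):
    # Positional replication: the change is applied at the same relative
    # record number, so the target must be an exact positional copy.
    target[rrn] = record

def apply_keyed(target, key_field, record):
    # Keyed replication: the target row is located by its unique key value,
    # which tolerates differing physical positions but costs a search.
    key = record[key_field]
    for i, row in enumerate(target):
        if row[key_field] == key:
            target[i] = record
            return
    target.append(record)

source_change = {"ORDER": 1002, "QTY": 5}
# Target rows are in a different physical order than on the source system.
target = [{"ORDER": 1002, "QTY": 3}, {"ORDER": 1001, "QTY": 7}]
apply_keyed(target, "ORDER", source_change)
print(target[0]["QTY"])  # 5
```

Applying the same change positionally at the source RRN would have overwritten the wrong row here, which is exactly the RRN-mismatch risk the Attention notice above warns about when switching from *KEYED to *POSITION.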
Changing a data group file entry to use keyed replication

By default, data group file entries use the same file entry options as specified in the data group definition. If you want to use keyed replication for one or more individual data group file entries defined for a data group, the values you define in the data group file entry override the defaults used by the data group for the associated file.

Before you change a data group file entry to support keyed replication, you need the following:

1. The data group definition used for replication must have a Data group type of *ALL or *DB.
2. The data group must be configured so that cooperative processing occurs primarily through the system journal. For more information, see "Checklist: Converting to legacy cooperative processing" on page 138.
3. The files identified by the data group file entries for the data group must be eligible for keyed replication. See topic "Verifying Key Attributes" in the Using MIMIX book.
4. Verify that the files defined to the data group are journaled correctly. DB journal entry processing must have Before images as *SEND for source send configurations. When using remote journaling, all journal entries are sent. Consult your MIMIX administrator to verify this.
5. The data group file entry must have File and tracking ent. options set as follows:
   • To override the defaults from the data group definition to use keyed replication on only selected data group file entries, verify that you have the value you need specified for the Journal image (*BOTH is recommended) and specify *KEYED for the Replication type.
   • If you are using keyed replication at the data group level, the data group file entries can use the default value *DGDFT for both Journal image and Replication type.
   • If you are modifying existing file entries in this way, you should specify *UPDADD for the Update option parameter.
6. If the file is not being journaled correctly, for example the data group file entry is not set as described in Step 4, you need to end journaling for the file entries. Use topic "Ending Journaling" in the Using MIMIX book. After you have changed individual data group file entries, start journaling for the file entries using "Starting journaling for physical files" on page 297.

Note: You can use any of the following ways to configure data group file entries for keyed replication:
• Use either procedure in topic "Loading file entries" on page 246 to add or modify a group of data group file entries.
• Use topic "Adding a data group file entry" on page 252 to create a new file entry.
• Use topic "Changing a data group file entry" on page 253 to modify an existing file entry.

Verifying key attributes

Before you configure for keyed replication, verify that the file or files for which you want to use keyed replication are actually eligible. Do the following to verify that the attributes of a file are appropriate for keyed replication:

1. On a command line, type VFYKEYATR (Verify Key Attributes) and press Enter. The Verify Key Attributes display appears.
2. Do one of the following:
   • To verify a file in a library, specify a file name and a library.
   • To verify all files in a library, specify *ALL and a library.
   • To verify files associated with the file entries for a data group, specify *MIMIXDFN for the File prompt and press Enter. Prompts for the Data group definition appear. Specify the name of the data group that you want to check.
3. Press Enter. A spooled file is created that indicates whether you can use keyed replication for the files in the library or data group you specified.
4. Display the spooled file (WRKSPLF command) or use your standard process for printing.

You can use keyed replication for the file if *BOTH appears in the Replication Type Allowed column. If a value appears in the Replication Type Defined column, the file is already defined to the data group with the replication type shown.
Data distribution and data management scenarios

MIMIX supports a variety of scenarios for data distribution and data management including bi-directional data flow, file combining, file sharing, and file merging. MIMIX also supports data distribution techniques such as broadcasting, file routing, and cascading. These techniques require additional planning before you configure MIMIX. You may need to consider the technical aspects of implementing a technique as well as how your business practices may be affected.

• Bi-directional data flow is a data sharing technique in which the same named database file can be replicated between databases on two systems in two directions at the same time. When MIMIX user journal replication processes are configured for bi-directional data flow, each system is both a source system and a target system. System journal replication processing supports the bi-directional flow of objects between two systems, but it does not support simultaneous (bi-directional) updates to the same object on multiple systems. Updating the same object from two systems at the same time can cause a loss of data integrity.

• File sharing is a scenario in which a file can be shared among a group of systems and can be updated from any of the systems in the group. An example of file sharing is when an enterprise maintains a single database file that must be updated from any of several systems. MIMIX implements file sharing among systems defined to the same MIMIX installation. To enable file sharing, MIMIX must be configured to allow bi-directional data flow. Often, this support requires a combination of advanced replication techniques as well as customizing.

In user journal replication processing, MIMIX provides options within the data group definition and for individual data group file entries for resolving most collision points. Additionally, collision resolution classes allow you to specify different resolution methods for each collision point.

Configuring for bi-directional flow

Both MIMIX user journal and system journal replication processes allow data to flow bi-directionally, but their implementations and configuration requirements are distinct. Consider the following:

• Can each system involved modify the data?
• Do you need to filter data before sending it to another system?
• Do you need to implement multiple techniques to accomplish your goal?
• Do you need customized exit programs?
• Do any potential collision points exist and how will each be resolved?

MIMIX user journal replication provides filtering options within the data group definition.

Bi-directional requirements: system journal replication

To configure system journal replication processes to support bi-directional flow of objects, you need the following:

• Configure two data group definitions between the two systems. In one data group, specify *SYS1 for the Data source (DTASRC) parameter. In the other data group, specify *SYS2 for this parameter.

Note: In system journal replication, MIMIX does not support simultaneous updates to the same object on multiple systems and does not support conflict resolution for objects. Once an object is replicated to a target system, system journal replication processes prevent looping by not allowing the same object, regardless of name mapping, to be replicated back to its original source system.

Bi-directional requirements: user journal replication

To configure user journal replication processes to support bi-directional data flow, you need the following:

• Configure two data group definitions between the two systems. In one data group, specify *SYS1 for the Data source (DTASRC) parameter. In the other data group, specify *SYS2 for this parameter.
• Each data group definition should specify *NO for the Allow to be switched (ALWSWT) parameter.
• The files defined to each data group must be configured for keyed replication. Use topics "Keyed replication" on page 322 and "Verifying key attributes" on page 326 to determine if files can use keyed replication.
• For each data group definition, set the DB journal entry processing (DBJRNPRC) parameter so that its Generated by MIMIX element is set to *IGNORE. This prevents any journal entries that are generated by MIMIX from being sent to the target system and prevents looping.
• Analyze your environment to determine the potential collision points in your data. You need to understand how each collision point will be resolved. Consider the following:
  – Can the collision be resolved using the collision resolution methods provided in MIMIX or do you need customized exit programs? See "Collision resolution" on page 345.
  – How will your business practices be affected by collision scenarios? For example, say that you have an order entry application that updates shared inventory records such as Figure 19. If two locations attempt to access the last item in stock at the same time, which location will be allowed to fill the order? Does the other location automatically place a backorder or generate a report?

Figure 19. Example of bi-directional configuration to implement file sharing.
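The looping-prevention rule above — setting the Generated by MIMIX element of DBJRNPRC to *IGNORE — amounts to filtering the journal stream. The following Python sketch is a hypothetical model of that idea, not MIMIX code; the field names are assumptions for the example.

```python
# Illustrative sketch of bi-directional looping prevention (not MIMIX code).
# Entries deposited by the replication process itself must not be sent back,
# or each side would endlessly re-replicate the other's applied changes.

def entries_to_send(journal_entries):
    """Keep only entries produced by applications, not by replication."""
    return [e for e in journal_entries if not e["generated_by_mimix"]]

journal = [
    {"seq": 1, "generated_by_mimix": False},  # application update: replicate
    {"seq": 2, "generated_by_mimix": True},   # applied by MIMIX: would loop
    {"seq": 3, "generated_by_mimix": False},
]
print([e["seq"] for e in entries_to_send(journal)])  # [1, 3]
```

System journal replication achieves the same effect differently, as noted above: it refuses to send a replicated object back to its original source system regardless of name mapping.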
The example in Figure 20 shows file combining from multiple source systems onto a composite file on the management system. If only part of the information from the source system is to be sent to the target system, you need an exit program to filter out transactions that should not be sent to the target system. After the combining operation is complete, if the combined data will be replicated or distributed again, you need to prevent the data from returning to the system on which it originated.

Figure 20. Example of file combining

Example of file combining

To enable file combining between two systems, MIMIX user journal replication must be configured as follows:
• Configure the data group definition for keyed replication. See topic “Keyed replication” on page 322.
• If you allow the data group to be switched (by specifying *YES for the Allow to be switched (ALWSWT) parameter) and a switch occurs, the file combining operation effectively becomes a file routing operation. To ensure that the data group will perform file combining operations after a switch, you need an exit program that allows the appropriate transactions to be processed regardless of which system is acting as the source for replication.

File routing is a scenario in which information from a single file can be split and sent to files on multiple target systems. An example of file routing is when one location within an enterprise performs updates to a file for all other locations, but only updated information relevant to a location is sent back to that location. In user journal replication processes, MIMIX implements file routing between a source system and multiple target systems that are defined to the same MIMIX installation. To enable file routing, MIMIX calls a user exit program that makes the file routing decision. The user exit program determines what data from the source file is sent to each of the target systems based on the contents of a journal transaction.

The example in Figure 21 shows the management system routing only the information relevant to each network system to that system.

Figure 21. Example of file routing
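The filtering role these exit programs play can be sketched in a few lines. The sketch below is illustrative only and is written in Python rather than as an IBM i exit program; the transaction fields and the routing rule are assumptions for the example, not the MIMIX exit interface.

```python
# Illustrative sketch of an exit program's routing decision (not the
# MIMIX exit interface). Each journal transaction is examined and only
# the targets that should receive it are returned; entries are never
# routed back to the system where they originated, preventing recursion.

def route(transaction, local_system):
    """Return the target systems that should receive this transaction."""
    targets = transaction.get("relevant_to", [])
    return [t for t in targets
            if t != transaction["origin"] and t != local_system]

# Hypothetical transaction: originated on CHICAGO, relevant to three sites.
txn = {"origin": "CHICAGO", "relevant_to": ["CHICAGO", "HONGKONG", "MEXICO"]}
print(route(txn, local_system="CHICAGO"))   # ['HONGKONG', 'MEXICO']
```

The same shape of decision serves both techniques: for file combining the filter limits which transactions flow to the composite file, and for file routing it selects the target systems for each update.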
Example of file routing

To enable file routing, MIMIX user journal replication processes must be configured as follows:
• Configure the data group definition for keyed replication. See topic “Keyed replication” on page 322.
• The data group definition must call an exit program that filters transactions so that only those transactions which are relevant to the target system are sent to it.
• If you allow the data group to be switched (by specifying *YES for the Allow to be switched (ALWSWT) parameter) and a switch occurs, the file routing operation effectively becomes a file combining operation. To ensure that the data group will perform file routing operations after a switch, you need an exit program that allows the appropriate transactions to be processed regardless of which system is acting as the source for replication.

Configuring for cascading distributions

Cascading is a distribution technique in which data passes through one or more intermediate systems before reaching its destination. Cascading may be used with other data management techniques to accomplish a specific goal. MIMIX supports cascading in both its user journal and system journal replication paths. However, the paths differ in their implementation.

Note: Once an object is replicated to a target system, MIMIX system journal replication processes prevent looping by not allowing the same object, regardless of name mapping, to be replicated back to its original source system.

Figure 22 shows the basic cascading configuration that is possible within one MIMIX installation. Data can pass through one intermediate system within a MIMIX installation. Additional MIMIX installations will allow you to support cascading in scenarios that require data to flow through two or more intermediate systems before reaching its destination.

Figure 22. Example of a simple cascading scenario

To enable cascading you must have the following:
• Within a MIMIX installation, the management system must be the intermediate system.
• Configure a data group between the originating system (a network system) to the intermediate (management) system. Configure another data group for the flow from the intermediate (management) system to the destination system.
• For user journal replication, you also need the following:
– The data groups should be configured to send journal entries that are generated by MIMIX. To do this, specify *SEND for the Generated by MIMIX element of the DB journal entry processing (DBJRNPRC) parameter.
– If it is possible for the data to be routed back to the originating or any intermediate systems, you need to use keyed replication.

Figure 23 shows an example where the Chicago system is a management system in a MIMIX installation that collects data from the network systems and broadcasts the updates to the other participating systems. This is a cascading scenario because changes that originate on the Hong Kong system pass through an intermediate system (Chicago) before being distributed to the Mexico City system and other network systems in the MIMIX installation. The network systems send unfiltered data to the management system, where MIMIX performs the database updates. Exit programs are required for the data groups acting between the management system and the destination systems and need to prevent updates from flowing back to their system of origin.

Figure 23. Bi-directional example that implements cascading for file distribution.

Trigger support

A trigger program is a user exit program that is called by the database when a database modification occurs. Trigger programs can be used to make other database modifications, which are called trigger-induced database modifications.

How MIMIX handles triggers

The method used for handling triggers is determined by settings in the data group definition and file entry options. MIMIX supports database trigger replication using one of the following ways:
• Using IBM i trigger support to prevent the triggers from firing on the target system and replicating the trigger-induced modifications.
• Ignoring trigger-induced modifications found in the replication stream and allowing the triggers to fire on the target system.

Considerations when using triggers

You should choose only one of these methods for each data group file entry. Which method you use depends on a variety of considerations:
• The default replication type for data group file entry options is positional replication. With positional replication, each file is replicated based on the position of the record within the file. The value of the relative record number used in the journal entry is used to locate a database record being updated or deleted. When positional replication is used and triggers fire on the target system, they can cause trigger-induced modifications to the files being replicated. These trigger-induced modifications can change the relative record number of the records in the file because the relative record numbers of the trigger-induced modifications are not likely to match the relative record numbers generated by the same triggers on the source system. Because of this, when positional replication is used, the triggers should not be permitted to fire on the target system. You should prevent the triggers from firing on the target system and replicate the trigger-induced modifications from the source to the target system.
• A slight performance advantage may be achieved by replicating the trigger-induced modifications instead of ignoring them and allowing the triggers to fire. This is because the database apply process checks each transaction before processing to see if filtering is required, and firing the trigger adds additional overhead to database processing.
• When triggers do not cause database record changes, you may choose to allow them to fire on the target system. However, if non-database changes occur and you are using object replication, the object replication will replicate trigger-induced object changes from the source system. In this case, the triggers should not be allowed to fire on the target system.
• When trigger-induced modifications are made by replicated files to files not replicated by MIMIX, you may want the triggers to fire on the target system. This will ensure that the files that are not replicated receive the same trigger-induced modifications on the target system as they do on the source system.
• When triggers are allowed to fire on the target system, the files being updated by these triggers should be replicated using the same apply session as the parent files to avoid lock contention.
• If you already have a trigger solution in place, you can continue to use that implementation or you can use the MIMIX trigger support.

Enabling trigger support

Trigger support is enabled for user journal replication by specifying the appropriate file entry option values for parameters on the Create Data Group Definition (CRTDGDFN) and Change Data Group Definition (CHGDGDFN) commands. You can also enable trigger support at a file level by specifying the appropriate file entry options associated with the file.

Synchronizing files with triggers

When you are synchronizing a file with triggers and you are using MIMIX trigger support, you must specify *DATA on the Sending mode parameter on the Synchronize DG File Entry (SYNCDGFE) command. On the Disable triggers on file parameter, you can specify if you want the triggers disabled on the target system during file synchronization. The default is *DGFE, which will use the value indicated for the data group file entry. If you specify *YES, triggers will be disabled on the target system during synchronization. A value of *NO will leave triggers enabled. For more information on synchronizing files with triggers, see “About synchronizing file entries (SYNCDGFE command)” on page 439.
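The reason only one trigger-handling method may be chosen per file entry can be demonstrated with any database that supports triggers. The sketch below uses SQLite purely as a stand-in for illustration; the table and trigger names are invented, and IBM i is not involved.

```python
import sqlite3

# A trigger on ORDERS writes an audit row: a trigger-induced modification.
# SQLite stands in for the database; table and trigger names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER);
    CREATE TABLE audit  (order_id INTEGER);
    CREATE TRIGGER trg AFTER INSERT ON orders
    BEGIN
        INSERT INTO audit VALUES (NEW.id);
    END;
""")

# Source side: one insert fires the trigger once, producing one audit row.
con.execute("INSERT INTO orders VALUES (1, 5)")

# A target system that BOTH applies the replicated audit row AND lets its
# own copy of the trigger fire would record the modification twice:
con.execute("INSERT INTO audit VALUES (1)")   # the replicated audit row

rows = con.execute("SELECT COUNT(*) FROM audit").fetchone()[0]
print(rows)   # 2 -- hence: replicate the modifications OR fire the trigger
```

Either the trigger-induced rows are replicated and the target's triggers are suppressed, or the replicated rows are filtered out and the target's triggers regenerate them — doing both duplicates the modification.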
Constraint support

A constraint is a restriction or limitation placed on a file. There are four types of constraints: referential, unique, primary key and check. Unique, primary key and check constraints are single file operations transparent to MIMIX. Referential constraints, however, ensure the integrity between multiple files. For example, you could use a referential constraint to:
• Ensure when an employee record is added to a personnel file that it has an associated department from a company organization file.
• Empty a shopping cart and remove the order records if an internet shopper exits without placing an order.

When constraints are added, removed or changed on files replicated by MIMIX, these constraint changes will be replicated to the target system. If a constraint is met for a database operation on the source system, the same constraint will be met for the replicated database operation on the target.

To use this support:
• Ensure that your target system is at the same release level or greater than the source system to ensure the target system is able to use all of the IBM i function that is available on the source system. If an earlier IBM i level is installed on the target system, the operation will be ignored.
• You must have your MIMIX environment configured for either MIMIX Dynamic Apply or legacy cooperative processing.

Referential constraints with delete rules

Referential constraints can cause changes to dependent database files when the parent file is changed. Referential constraints defined with the following delete rules cause dependent files to change:
• *CASCADE: Record deletion in a parent file causes records in the dependent file to be deleted when the parent key value matches the foreign key value.
• *SETNULL: Record deletion in a parent file updates those records in the dependent file where the value of the parent non-null key matches the foreign key value. For those dependent records that meet the preceding criteria, all null capable fields in the foreign key are set to null. Foreign key fields with the non-null attribute are not updated.
• *SETDFT: Record deletion in a parent file updates those records in the dependent file where the value of the parent non-null key matches the foreign key value. For those dependent records that meet the preceding criteria, the foreign key field or fields are set to their corresponding default values.

Referential constraint handling for these dependent files is supported through the replication of constraint-induced modifications. When referential constraints cause changes to dependent files not replicated by MIMIX, enabling the same constraints on the target system will allow changes to be made to the dependent files.

MIMIX does not provide the ability to disable constraints because IBM i would check every record in the file to ensure constraints are met once the constraint is re-enabled. This would cause a significant performance impact on large files and could impact switch performance. If the need exists, this can be done through automation.

Replication of constraint-induced modifications

MIMIX always attempts to apply constraint-induced modifications. With the exception of files that have been placed on hold, MIMIX always enables constraints and applies constraint entries. MIMIX tolerates mismatched before images or minimized journal entry data CRC failures when applying constraint-generated activity. Because the parent record was already applied, entries with mismatched before images are applied and entries with minimized journal entry data CRC failures are ignored.

The considerations for replication of constraint-induced modifications are:
• Files with referential constraints and any dependent files must be replicated by the same apply session.
• Earlier levels of MIMIX provided the Process constraint entries element in the File entry options (FEOPT) parameter, which now is removed. This element was removed in a version 5 service pack. Any previously specified value is now mapped to *YES so that processing always occurs.
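The *CASCADE and *SETNULL delete rules behave like the standard SQL referential actions ON DELETE CASCADE and ON DELETE SET NULL. The sketch below illustrates the resulting constraint-induced modifications using SQLite as a stand-in for clarity only; it is not IBM i DB2, and the table names are invented.

```python
import sqlite3

# SQLite analogues of the *CASCADE and *SETNULL delete rules; the table
# names are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")    # SQLite requires opting in
con.executescript("""
    CREATE TABLE dept (id INTEGER PRIMARY KEY);
    -- *CASCADE analogue: deleting a parent deletes matching dependents.
    CREATE TABLE emp (name TEXT,
                      dept_id INTEGER REFERENCES dept(id) ON DELETE CASCADE);
    -- *SETNULL analogue: deleting a parent nulls the foreign key.
    CREATE TABLE badge (label TEXT,
                        dept_id INTEGER REFERENCES dept(id) ON DELETE SET NULL);
    INSERT INTO dept  VALUES (10);
    INSERT INTO emp   VALUES ('alice', 10);
    INSERT INTO badge VALUES ('b1', 10);
""")

con.execute("DELETE FROM dept WHERE id = 10")   # one delete on the parent...

emp_left = con.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
badge_fk = con.execute("SELECT dept_id FROM badge").fetchone()[0]
print(emp_left, badge_fk)   # 0 None -- two constraint-induced modifications
```

A single journaled delete on the parent file thus produces additional changes in the dependent files, which is why the dependents must ride in the same apply session as their parent.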
Handling SQL identity columns

MIMIX replicates identity columns in SQL tables and checks for scenarios that can cause duplicate identity column values after switching and, if possible, prevents the problem from occurring. In some cases, identity columns will need to be processed by manually running the Set Identity Column Attribute (SETIDCOLA) command. This command is useful for handling scenarios that would otherwise result in errors caused by duplicate identity column values when inserting rows into tables.

The identity column problem explained

In SQL, a table may have a single numeric column which is designated an identity column. When rows are inserted into the table, the database automatically generates a value for this column, incrementing the value with each insertion. The value generator for the identity column is stored internally with the table. In typical usage, the identity column is also a primary, unique key and set to not cycle.

Several attributes define the behavior of the identity column, including: Minimum value, Maximum value, Start value, Increment amount, Cache amount, and Cycle/No Cycle. This discussion is limited to the following attributes:
• Increment amount - the amount by which each new row’s identity column differs from the previously inserted row. This can be a positive or negative value.
• Start value - the value used for the next row added. This can be any value, including one that is outside of the range defined by the minimum and maximum values.
• Cycle/No Cycle - indicates whether or not values cycle from maximum back to minimum, or from minimum to maximum if the increment is negative.

Detailed technical descriptions of all attributes are available in the IBM eServer iSeries Information Center. Look in the Database section for the SQL Reference for CREATE TABLE and ALTER TABLE statements.

Nothing prevents identity column values from being generated more than once. Journal entries used to replicate inserted rows on the production system do not contain information that would allow the value generator to remain synchronized. Similarly, other actions such as applying journaled changes (APYJRNCHG) also do not keep the value generator synchronized. The result is that after a switch to the backup system, the starting value for the value generator on the backup system is used instead of the next expected value based on the table’s content. This can result in the reuse of identity column values, which in turn can cause a duplicate key exception. Any SQL table with an identity column that is replicated by a switchable data group can potentially experience this problem.

Following certain actions which transfer table data from one system to another, rows can be inserted on the backup system using identity column values other than the next expected value. This can occur after a MIMIX switch and after other actions such as certain save/restore operations on the backup system. After performing a switch to the backup system, the next identity column value generated on the receiving system may not be as expected.
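The failure mode described above can be modeled in a few lines of Python. This is an illustrative sketch only — the real value generator is internal to the database — and the class and its restart behavior are assumptions made for the example.

```python
# Illustrative model of the identity column problem (not MIMIX code).
# The generator state lives with the table and is NOT carried along by
# the journal entries that replicate inserted rows.

class Table:
    def __init__(self, start=1, increment=1):
        self.next_value = start      # internal value generator
        self.increment = increment
        self.rows = {}               # identity value -> row data

    def insert(self, data):
        value = self.next_value
        if value in self.rows:       # identity column is a unique key
            raise ValueError(f"duplicate key exception: {value}")
        self.rows[value] = data
        self.next_value += self.increment
        return value

production = Table()
backup = Table()
for item in ["a", "b", "c"]:
    production.insert(item)

# Replication copies the row images to the backup table, but the backup's
# value generator still holds its initial start value of 1.
backup.rows = dict(production.rows)

# After a switch, the first insert on the backup reuses value 1.
try:
    backup.insert("d")
except ValueError as exc:
    print(exc)   # prints: duplicate key exception: 1
```

A RESTART WITH alteration, which is what SETIDCOLA performs, corresponds to resetting next_value past the highest replicated value.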
When the SETIDCOLA command is useful

Important! The SETIDCOLA command should not be used in all environments. Its use is subject to the limitations described in “SETIDCOLA command limitations” on page 339.

The SETIDCOLA command can be used to determine whether a data group replicates tables which contain identity columns and report the results. It is recommended that you initially use this capability before setting values. To do so, specify ACTION(*CHECKONLY) on the command. You may want to perform this type of check whenever new tables are created that might contain identity columns. See “Checking for replication of tables with identity columns” on page 343.

For many environments, default values on the SETIDCOLA command are appropriate for use following a planned switch to the backup system to ensure that the identity column values inserted on the backup system start at the proper point. Examples of when you may need to run the SETIDCOLA command are:
• After performing a planned switch to the backup system.
• After a restore (RSTnnn command) from a "save of backup machine." For this scenario, run the command on the system on which you performed the restore.
• Before saving files to tape or other media from the backup system. For this scenario, run the command from the backup system. By doing this, you avoid the need to run the command after restoring.

Also, the SETIDCOLA command is needed in any environment in which you are attempting to restore from a save that was created while replication processes were running.

Limited support for unplanned switch - Following an unplanned switch, the backup system may not be caught up with all the changes that occurred on the production system. Using the SETIDCOLA command on the backup system may result in the generation of identity column values that were used on the production system but not yet replicated to the backup system. In this scenario, run the command from the backup system before starting replication in the reverse direction. Careful selection of the value of the INCREMENTS parameter can minimize the likelihood of this problem. See “Examples of choosing a value for INCREMENTS” on page 342.

SETIDCOLA command limitations

In general, SETIDCOLA only works correctly for the most typical scenario, where all values for identity columns have been generated by the system. In other scenarios, it may not restart the identity column at a useful value. If you cannot use the SETIDCOLA command in your environment, consider the “Alternative solutions” on page 340.

Not supported - The following scenarios are known to be problematic and are not supported:
• Application generated values - Optionally, applications can supply identity column values at the time they insert rows into a table. These application-generated identity values may be outside the minimum and maximum values set for the identity column. For example, a table’s identity column range may be from 1 through 100,000 but an application occasionally supplies values in the range of 200,000 through 500,000. If cycling is permitted and the SETIDCOLA command is run, the command would recognize the higher values from the application and would cycle back to the minimum value of 1. Because the result would be problematic, the SETIDCOLA command is not recommended for tables which allow application-generated identity values. This scenario must be handled manually.
• Columns that have cycled - If an identity column allows cycling and adding a row increments its value beyond the maximum range, the restart value is reset to the beginning of the range. Because cycles are allowed, the identity column values of the deleted rows will be re-generated for newly inserted rows, and the assumption is that duplicate keys will not be a problem. However, unexpected behavior may occur when cycles are allowed and old rows are removed from the table with a frequency such that the identity column values never actually complete a cycle. The SETIDCOLA command cannot address this scenario; it must be handled manually.
• Rows deleted on production table - If rows with values at the end of the range are deleted and you perform a switch followed by the SETIDCOLA command, running the command on the backup system may result in re-generating values that were previously used. In this scenario, the ideal starting point would be wherever there is the largest gap between existing values. The SETIDCOLA command cannot address this scenario; it must be handled manually.
• No rows in backup table - If there are no rows in the table on the backup system, the restart value will be set to the initial start value. This must be handled manually.

An application may require that an identity column value never be generated twice. For example, the value may be stored in a different table, data area or data queue, given to another application, or given to a customer. The application may also require that the value always locate either the original row or, if the row is deleted, no row at all. The SETIDCOLA command is not recommended for this environment.

Alternative solutions

If you cannot use the SETIDCOLA command because of its known limitations, you have these options.

Manually reset the identity column starting point: Following a switch to the backup system, you can manually reset the restart value for tables with identity columns. The SQL statement ALTER TABLE name ALTER COLUMN can be used for this purpose.
Convert to SQL sequence objects: To overcome the limitations of identity column switching and to avoid the need to use the SETIDCOLA command, SQL sequence objects can be used instead of identity columns. Sequence objects are implemented using a data area which can be replicated by MIMIX. The data area for the sequence object must be configured for replication through the user journal (cooperatively processed).

SETIDCOLA command details

The Set Identity Column Attribute (SETIDCOLA) command performs a RESTART WITH alteration on the identity column of any SQL tables defined for replication in the specified data group. For each table, the new restart value determines the identity column value for the next row added to the table. Careful selection of values can ensure that, when applications are started, the identity column starting values exceed the last values used prior to the switch or save/restore operation. Following a planned switch where tables are synchronized, you can usually use *DFT. Following an unplanned switch, use a larger value to ensure that you skip any values used on the production system that may not have been replicated to the backup system.

The Data group definition (DGDFN) parameter identifies the data group against which the specified action is taken. Only tables which can be replicated by the specified data group are acted upon.

The Action (ACTION) parameter specifies what action is to be taken by the command. Possible values are:
*SET - The command checks and sets the attribute of the identity column of each table which meets the criteria. Only tables that are identified for replication by the specified data group are addressed.
*CHECKONLY - The command checks for tables which have identity columns. It does not set the attributes of the identity columns. The result of the check is reported in the job log. If no tables are affected, message LVI3E26 will be issued. If there are affected tables, message LVE3E2C will be issued.

The Number of increments to skip (INCREMENTS) parameter specifies how many increments of the counter which generates the starting value for the identity column to skip. Possible values are:
*DFT - Skips the default number of increments, currently set to 1 increment.
number-of-increments-to-skip - Specify the number of increments to skip. Valid values are 1 through 2,147,483,647. The value specified is used for all tables which meet the criteria for processing by the command, so the value chosen must be valid for all tables in the data group. Be sure to read the information in “Examples of choosing a value for INCREMENTS” on page 342.

The Number of jobs (JOBS) parameter specifies the number of jobs to use to process tables which meet the criteria for processing by the command. The default value, *DFT, is currently set to one job. You can specify as many as 30 jobs. A table will only be updated by one job; each job can update multiple tables.

If you use Lakeview-provided product-level security, the minimum authority level for this command is *OPR.
Usage notes
• The reason you are using this command determines which system you should run it from. See “When the SETIDCOLA command is useful” on page 339 for details.
• This command can be long running when many files defined for replication by the specified data group contain identity columns. This is especially true when affected identity columns do not have indexes over them or when they are referenced by constraints. Specifying a higher number of jobs (JOBS) can reduce this time.
• The command can be invoked manually or as part of a MIMIX Model Switch Framework custom switching program.
• Internally, the SETIDCOLA command builds RUNSQLSTM scripts (one for each job specified) and uses RUNSQLSTM in spawned jobs to execute the scripts. This command creates a work library named SETIDCOLA which is used by the command. RUNSQLSTM produces spooled files showing the ALTER TABLE statements executed, along with any error messages received. If any statement fails, the RUNSQLSTM will also fail and return the failing status back to the job where SETIDCOLA is running, and an escape message will be issued. The SETIDCOLA library is not deleted so that it can be used for any error analysis.
• Evaluation of your environment to determine an appropriate increment value is highly recommended before using the command.

Examples of choosing a value for INCREMENTS

When choosing a value for INCREMENTS, consider the rate at which each table consumes its available identity values, as well as any backlog in MIMIX processing and the activity causing you to run the command. Consider the needs of each table, and account for the needs of the table which consumes numbers at the highest rate.

Note: The MIMIX backlog, sometimes called the latency of changes being transferred to the backup system, is the amount of time from when an operation occurs on the production system until it is successfully sent to the backup system by MIMIX. It does not include the time it takes for MIMIX to apply the entry. Use the DSPDGSTS command to view the Unprocessed entry count for the DB Apply process; this value is the size of the backlog. You need to approximate how long it would take for this value to become zero (0) if application activity were to be stopped on the production system.

For example, if the rate of the fastest file is 1,000 numbers per hour and MIMIX is 15 minutes behind (0.25 hours), the value you specify for INCREMENTS needs to result in at least 250 numbers (1000 x 0.25) being skipped. If you have available numbers to use, add a safety factor of at least 100 percent, since all measurements are approximations or based on historical data. Adding 100% to 250 results in an increment of 500.

Consider the following scenarios. Data group ORDERS contains tables A and B. Each row added to table A increases the identity value by 1 and each row added to table B increases the identity value by 1,000. Prior to a switch, on the production system the latest value for table A was 75 and the latest value for table B was 30,000. Rows are inserted into table A at a rate of approximately 600 rows per hour. Rows are inserted into table B at a rate of approximately 20 rows per hour.
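The sizing arithmetic above can be captured in a small helper. This is an illustrative sketch, not part of MIMIX; the function name and the 100 percent default safety factor are assumptions based on the guidance in this section.

```python
def increments_to_skip(rows_per_hour, backlog_hours, safety_factor=1.0):
    """Estimate an INCREMENTS value: consumption rate of the fastest
    table times the backlog, padded by a safety factor of at least
    100 percent (safety_factor=1.0 doubles the base amount)."""
    base = rows_per_hour * backlog_hours
    return int(base * (1 + safety_factor))

# Fastest table consumes 1,000 numbers per hour; MIMIX is 15 minutes
# (0.25 hours) behind: at least 250 values must be skipped, 500 with
# the 100 percent safety factor applied.
print(increments_to_skip(1000, 0.25))   # 500
```

Applied to data group ORDERS below, table A's rate of 600 rows per hour with the same backlog gives 150 values, or 300 after the safety factor.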
The next rows added to table A and B will have values of 75+(300*1) = 375 and 30. 2. See “Examples of choosing a value for INCREMENTS” on page 342. since all measurements are approximations or based on historical data. In 15 minutes. From previous experience. as defined in “When the SETIDCOLA command is 343 . Before starting replication in the reverse direction you run the SETIDCOLA command with an INCREMENTS value of 1. You may want to plan for the time required for investigation steps and time to run the command to set values. 3. do the following. You performed an unplanned switch. Message LVE3E2C identifies the number of tables found with identity columns. See “SETIDCOLA command limitations” on page 339. Because replication of all transactions completed before the switch and no users have been allowed on the backup system. Also consider the MIMIX backlog at the time you plan to use the command. the steps you need to perform to set the identity columns of files being replicated by a data group are listed below. you know that the latency of changes being transferred to the backup system is approximately 15 minutes. This suggests an INCREMENTS value of 150. this amount should be adjusted by a factor of at least 100% to 300 to ensure that duplicate identity column values are not generated on the backup system.000 respectively. Scenario 2. See “Checking for replication of tables with identity columns” on page 343. 1. Setting the identity column attribute for replicated files At a high level. respectively. • Checking for replication of tables with identity columns To determine whether any files being replicated by a data group have identity columns.000 + (300*1000)= 330. If the results found tables with identity columns. Rows are inserted into Table A at the highest rate. the backup system has the same values as the production.25 hours). You performed a planned switch for test purposes. Check the job log for the following messages. 
Setting the identity column attribute for replicated files

At a high level, the steps you need to perform to set the identity columns of files being replicated by a data group are listed below. You may want to plan for the time required for investigation steps and time to run the command to set values.
1. Run the SETIDCOLA command in check only mode first to determine if you need to set values. See "Checking for replication of tables with identity columns" on page 343.
2. Determine whether limitations exist in the replicated tables that would prevent you from running the command to set values. See "SETIDCOLA command limitations" on page 339. If limitations exist, you need to evaluate the tables and determine whether you can use the SETIDCOLA command to set values.
3. Determine what increment value is appropriate for use for all tables replicated by the data group. Consider the needs of each table, and also consider the MIMIX backlog at the time you plan to use the command. See "Examples of choosing a value for INCREMENTS" on page 342.
4. From the appropriate system, as defined in "When the SETIDCOLA command is useful" on page 339, specify a data group and the number of increments to skip in the command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*SET) INCREMENTS(number)

Checking for replication of tables with identity columns

To determine whether any files being replicated by a data group have identity columns, do the following:
1. From the production system, specify the data group to check in the following command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*CHECKONLY)
2. Check the job log for the following messages. Message LVI3E26 indicates that no tables were found with identity columns. Message LVE3E2C identifies the number of tables found with identity columns.

Collision resolution

Collision resolution is a function within MIMIX user journal replication that automatically resolves detected collisions without user intervention. You can specify how to handle collision resolution at each of the 12 collision points, and you can specify one or more resolution methods to use for each collision point. With automatic synchronization (*AUTOSYNC), MIMIX attempts to automatically synchronize file members when an error is detected; the file member is synchronized using copy active file processing, and normal processing will continue for all other members in the file. With held due to error (*HLDERR), MIMIX flags file collisions as errors and places the file entry on hold. When either method is specified in a data group definition or a data group file entry, that method is used for all 12 of the collision points.
MIMIX supports the following choices for collision resolution that you can specify in the file entry options (FEOPT) parameter in either a data group definition or in an individual data group file entry:
• Held due to error: (*HLDERR) This is the default value for collision resolution in the data group definition and data group file entries. When held due to error is specified in the data group definition or the data group file entry, it is used for all 12 of the collision points. Any data group file entry for which a collision is detected is placed in a "held due to error" state (*HLDERR). The member is put on hold while the database apply process continues with the next transaction. If the file entry specifies member *ALL, a temporary file entry is created for the member in error and only that file entry is held. This results in the journal entries being replicated to the target system but they are not applied to the target database. You must take action to apply the changes and return the file entry to an active state.
• Automatic synchronization: (*AUTOSYNC) When automatic synchronization is specified in the data group definition or data group file entry, it is used for all 12 of the collision points. The file member is synchronized using copy active file processing, unless the collision occurred at the compare attributes collision point. In the latter case, the file is synchronized using save and restore processing.
• Collision resolution class: A collision resolution class is a named definition which provides more granular control of collision resolution. If you specify a named collision resolution class in a data group definition or data group file entry, you can customize what resolution method to use at each collision point. Within a collision resolution class, you can specify multiple methods of collision resolution to attempt at each collision point. If the first method specified does not resolve the problem, MIMIX uses the next method specified for that collision point.

Additional methods available with CR classes

Automatic synchronization (*AUTOSYNC) and held due to error (*HLDERR) are essentially predefined resolution methods, and both are available for use at each collision point. Some collision points also provide additional methods of resolution that can only be accessed by using a collision resolution class. If multiple collision resolution methods are specified and do not resolve the problem, MIMIX will always use *HLDERR as the last resort. Within a collision resolution class, the following resolution methods are also available:
• Exit program: (*EXITPGM) A specified user exit program is called to handle the data collision. This method is available for all collision points. The MXCCUSREXT service program is shipped with MIMIX and runs on the target system; it dynamically links your exit program. The exit program is called on three occasions. The first occasion is when the data group is started; this call allows the exit program to handle any initialization or set up you need to perform. The MXCCUSREXT service program (and your exit program) is also called if a collision occurs at a collision point for which you have indicated that an exit program should perform collision resolution actions. Finally, the exit program is called when the data group is ended.
• Applied: (*APPLIED) This method is only available for the update collision point 3 and the delete collision point 1. For update collision point 3, the transaction is ignored if the record to be updated already equals the data in the updated record. For delete collision point 1, the transaction is ignored because the record does not exist.
• Field merge: (*FLDMRG) This method is only available for the update collision point 3 and is used with keyed replication. If certain rules are met, fields from the after-image are merged with the current image of the file to create a merged record that is written to the file. Each field within the record is checked using the series of algorithms below. In the following algorithms, these abbreviations are used: RUB = before-image of the source file; RUP = after-image of the source file; RCD = current record image of the target file.
a. If the RUB equals the RUP and the RUB equals the RCD, do not change the RUP field data.
b. If the RUB equals the RUP and the RUB does not equal the RCD, copy the RCD field data into the RUP record.
c. If the RUB does not equal the RUP and the RUB equals the RCD, do not change the RUP field data.
d. If the RUB does not equal the RUP and the RUB does not equal the RCD, fail the field-level merge, placing the file on hold.

Requirements for using collision resolution

To use a collision resolution other than the default *HLDERR, you must have the following:
• The data group definition used for replication must specify a data group type of *ALL or *DB.
• You must specify either *AUTOSYNC or the name of a collision resolution class for the Collision resolution element of the File entry option (FEOPT) parameter. Specify the value as follows:
– If you want to implement collision resolution for all files processed by a data group, specify a value in the parameter within the data group definition.
– If you want to implement collision resolution for only specific files, specify a value in the parameter within an individual data group file entry.
• If you plan to use an exit program for collision resolution, you must first create a named collision resolution class. In the collision resolution class, specify *EXITPGM for each of the collision points that you want to be handled by the exit program and specify the name of the exit program.
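The field-level merge rules (a through d) for *FLDMRG can be sketched as follows. This is an illustrative model only, not MIMIX code; records are represented as simple field dictionaries, and the function name is hypothetical.

```python
# Illustrative sketch (not MIMIX code) of the *FLDMRG field-level merge rules.
# RUB = before-image of the source file, RUP = after-image of the source file,
# RCD = current record image of the target file.
def field_merge(rub, rup, rcd):
    """Return the merged record, or None if the merge fails (file held)."""
    merged = {}
    for field in rup:
        if rub[field] == rup[field] and rub[field] == rcd[field]:
            merged[field] = rup[field]   # rule a: field unchanged everywhere
        elif rub[field] == rup[field]:
            merged[field] = rcd[field]   # rule b: keep the target's change
        elif rub[field] == rcd[field]:
            merged[field] = rup[field]   # rule c: keep the source's change
        else:
            return None                  # rule d: changed on both sides, fail
    return merged

before  = {"qty": 5, "note": "a"}
after   = {"qty": 8, "note": "a"}   # source changed qty
current = {"qty": 5, "note": "b"}   # target changed note
print(field_merge(before, after, current))  # {'qty': 8, 'note': 'b'}
```

The merge succeeds only when each field was changed on at most one side; any field changed on both the source and the target (rule d) makes the collision unresolvable by merging, which is why the file is placed on hold.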
Note: Ensure that data group activity is ended before you change a data group definition or a data group file entry.

Working with collision resolution classes

Do the following to access options for working with collision resolution:
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select option 5 (Work with collision resolution classes) and press Enter. The Work with CR Classes display appears.

Creating a collision resolution class

To create a collision resolution class, do the following:
1. From the Work with CR Classes display, type a 1 (Create) next to the blank line at the top of the display and press Enter. The Create Collision Res. Class (CRTCRCLS) display appears.
2. Specify a name at the Collision resolution class prompt and provide the required values in the appropriate fields.
3. At each of the collision point prompts on the display, specify the value for the type of collision resolution processing you want to use. Press F1 (Help) to see a description of the collision point.
Note: You can specify more than one method of collision resolution for each prompt by typing a + (plus sign) at the prompt. You can specify as many as 3 values for each collision point prompt. To expand this field for multiple entries, type a plus sign (+) in the entry field opposite the phrase "+ for more" and press Enter. With the exception of the *HLDERR method, the methods are attempted in the order you specify. If the first method you specify does not successfully resolve the collision, then the next method is run. If all other methods fail, the member is placed on hold due to error; *HLDERR is always the last method attempted.
4. Press Page Down to see additional prompts.
5. At each of the collision point prompts on the second display, specify the value for the type of collision resolution processing you want to use.
6. If you specified *EXITPGM at any of the collision point prompts, specify the name and library of the program to use at the Exit point prompt.
7. At the Number of retry attempts prompt, specify the number of times to try to automatically synchronize a file. Inspect the default values shown on the display and either accept the defaults or change the value.
Note: If a file encounters repeated failures, an error condition that requires manual intervention is likely to exist. If this number is exceeded in the time specified in the Retry time limit, the file will be placed on hold due to error. Allowing excessive synchronization requests can cause communications bandwidth degradation and negatively impact communications performance.
8. At the Retry time limit prompt, specify the maximum number of hours to retry a process if a failure occurs due to a locking condition or an in-use condition.
9. To create the collision resolution class, press Enter.

Changing a collision resolution class

To change an existing collision resolution class, do the following:
1. From the Work with CR Classes display, type a 2 (Change) next to the collision resolution class you want and press Enter. The Change CR Class Details display appears.
2. Make any changes you need. Press Page Down to see all of the prompts.
3. To accept the changes, press Enter.

Deleting a collision resolution class

To delete a collision resolution class, do the following:
1. From the Work with CR Classes display, type a 4 (Delete) next to the collision resolution class you want and press Enter. A confirmation display appears.
2. Verify that the collision resolution class shown on the display is what you want to delete, and press Enter.

Displaying a collision resolution class

To display a collision resolution class, do the following:
1. From the Work with CR Classes display, type a 5 (Display) next to the collision resolution class you want and press Enter. The Display CR Class Details display appears.
2. Press Page Down to see all of the values.

Printing a collision resolution class

Use this procedure to create a spooled file of a collision resolution class which you can print.
1. From the Work with CR Classes display, type a 6 (Print) next to the collision resolution class you want and press Enter.
2. A spooled file is created with the name MXCRCLS on which you can use your standard printing procedure.
Table 43. the T-ZC is a member operation. PF-SRC. Clear Initialize Open Reorganize Remove Rename Add constraint Change constraint Remove constraint These T-ZC journal entries may or may not have a member name associated with them. and LF-38 file types. Change Logical File (CHGLF). These T-ZC journal entries are eligible for replication through the system journal. the T-ZC is assumed to be a file operation. PF38-DTA. Change Object Description (CHGOBJD) X X X X X X X X X Clear member for physical files (CLRPFM) Initialize member for physical files (INZPFM) Opening member for write for physical files Reorganize member for physical files (RGZPFM) Remove member for physical files and logical files (RMVM) Rename member for physical files and logical files (RNMM) Adding constraint for physical files (ADDPFCST) Changing constraint for physical files (CHGPFCST) Removing constraint for physical files (RMVPFCST) Operations that Generate T-ZC Access Type 10 25 30 36 37 38 62 63 64 1. If no member name is associated with the journal entry. Table 43 lists the T-ZC journal entry access types that are generated by PF-DTA. While 350 . If a member name is associated with the journal entry. MIMIX replicates file attributes and file member data for all T-ZC entries generated for logical and physical files configured for system journal replication. Default T-ZC processing: Files that have an object auditing value of *CHANGE or *ALL will generate T-ZC journal entries whenever changes to the object attributes or contents occur. Access Type 1 7 T-ZC journal entry access types generated by file objects. MIMIX provides the ability to prevent replication of predetermined sets of TZC journal entries associated with changes to object attributes or content changes.Omitting T-ZC content from system journal replication For logical and physical files configured for replication solely through the system journal. By default. Change Logical File Member (CHGLFM). LF. 
Change Physical File Member (CHGPFM). Access Type Description Add Change1 X Operation Type File Member X X Data Add member for physical files and logical files (ADDPFM) Change Physical File (CHGPF). The file must have an object auditing value of *CHANGE or *ALL in order for any T-ZC journal entry resulting from a change operation to be created in the system journal. If COOPDB is *YES. The OMTDTA parameter can also help you reduce the number of transactions that require substantial processing time to replicate.Omitting T-ZC content from system journal replication MIMIX recreates attribute changes on the target system. send. 351 . Omitting T-ZC entries: Through the Omit content (OMTDTA) parameter on data group object entry commands. especially in environments where the replication of file data transactions is not necessary. and data operations in transactions for the access types listed in Table 43 are replicated. This can cause unnecessary replication of data and can impact processing time. it may be desirable to replicate the file layout but not the file members or data. then COOPTYPE cannot specify *FILE.Member and data operations are omitted from replication. The OMTDTA parameter is useful when a file or member’s data does not need to be the replicated. Only file operations in transactions with access type 7 (Change) are replicated. – Omit content (OMTDTA) must be either *FILE or *MBR. *MBR . T-ZC journal entries with access types within the specified set are omitted from processing by MIMIX. File and member operations in transactions for the access types listed in Table 43 are replicated. and restore processes. such as T-ZC journal entries with access type 30 (Open). Each of the following values for the OMTDTA parameter define a set of access types that can be omitted from replication: *NONE . member additions and data changes require MIMIX to replicate the entire object using save. 
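The classification in Table 43, combined with the Omit content (OMTDTA) setting on a data group object entry, determines whether a given T-ZC entry is replicated. The following sketch models that decision; it is illustrative only, not MIMIX code, and it simplifies access type 7 (which the table classifies as both a file and a member operation) by treating it as a file operation.

```python
# Hypothetical sketch (not MIMIX code) of Table 43 plus OMTDTA filtering:
# OMTDTA(*FILE) omits member and data operations; OMTDTA(*MBR) omits only
# data operations; OMTDTA(*NONE) omits nothing.
OPERATION_TYPE = {
    1: "member",                                      # ADDPFM
    7: "file",                                        # simplification: CHGPF etc.
    10: "data", 25: "data", 30: "data", 36: "data",   # clear/init/open/reorganize
    37: "member", 38: "member",                       # RMVM / RNMM
    62: "file", 63: "file", 64: "file",               # constraint operations
}

def replicate_tzc(access_type, omtdta="*NONE"):
    """Return True if a T-ZC entry with this access type is replicated."""
    op = OPERATION_TYPE[access_type]
    if omtdta == "*FILE":
        return op == "file"       # only file operations survive
    if omtdta == "*MBR":
        return op != "data"       # data operations are omitted
    return True                   # *NONE: nothing is omitted

print(replicate_tzc(30, "*MBR"))   # False: access type 30 (Open) is a data op
print(replicate_tzc(62, "*FILE"))  # True: constraint changes are file ops
```

The frequently generated access type 30 (Open) entries fall into the data category, which is why specifying *MBR or *FILE can noticeably reduce replication workload for work files and temporary files.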
Omitting T-ZC entries: Through the Omit content (OMTDTA) parameter on data group object entry commands, you can specify a predetermined set of access types for *FILE objects to be omitted from system journal replication. T-ZC journal entries with access types within the specified set are omitted from processing by MIMIX. Each of the following values for the OMTDTA parameter defines a set of access types that can be omitted from replication:
*NONE - No T-ZCs are omitted from replication. All file, member, and data operations in transactions for the access types listed in Table 43 are replicated. This is the default value.
*FILE - Member and data operations are omitted from replication. Only file operations in transactions for the access types listed in Table 43 are replicated.
*MBR - Data operations are omitted from replication. File and member operations in transactions for the access types listed in Table 43 are replicated. Access type 7 (Change) for both file and member operations is replicated.

The OMTDTA parameter is useful when a file or member's data does not need to be replicated. For example, when replicating work files and temporary files, it may be desirable to replicate the file layout but not the file members or data. The OMTDTA parameter can also help you reduce the number of transactions that require substantial processing time to replicate, such as T-ZC journal entries with access type 30 (Open). While MIMIX recreates attribute changes on the target system, member additions and data changes require MIMIX to replicate the entire object using save, send, and restore processes. This can cause unnecessary replication of data and can impact processing time.

Configuration requirements and considerations for omitting T-ZC content

To omit transactions, logical and physical files must be configured for system journal replication and meet these configuration requirements:
• The data group definition must specify *ALL or *OBJ for the Data group type (TYPE).
• The file for which you want to omit transactions must be identified by a data group object entry that specifies the following:
– Cooperate with database (COOPDB) must be *NO when Cooperating object types (COOPTYPE) specifies *FILE. If COOPDB is *YES, then COOPTYPE cannot specify *FILE.
– Omit content (OMTDTA) must be either *FILE or *MBR.
To ensure that changes to the file continue to be journaled and replicated, the data group object entry should also specify *CHANGE or *ALL for the Object auditing value (OBJAUD) parameter.

Object auditing value considerations - For all library-based objects, MIMIX evaluates the object auditing level when starting a data group after a configuration change. If the configured value specified for the OBJAUD parameter is higher than the object's actual value, MIMIX will change the object to use the higher value. For example, recall how a file with an object auditing attribute value of *NONE is processed. If you use the SETDGAUD command to force the object to have an auditing level of *NONE and the data group object entry also specifies *NONE, any changes to the file will no longer generate T-ZC entries in the system journal. After MIMIX replicates the initial creation of the file through the system journal, any subsequent changes to file data are not replicated to the target system. As a result, the file on the target system reflects the original state of the file on the source system when it was retrieved for replication. This may affect whether replicated files on the source and target systems are identical. A similar situation can occur when OMTDTA is used to prevent replication of predetermined types of changes. For more information about object auditing, see "Managing object auditing" on page 55.

Object attribute considerations - When MIMIX evaluates a system journal entry and finds a possible match to a data group object entry which specifies an attribute in its Attribute (OBJATR) parameter, MIMIX must retrieve the attribute from the object in order to determine which object entry is the most specific match. If the object attribute is not needed to determine the most specific match to a data group object entry, it is not retrieved; MIMIX does not need to consider the object attribute in any other evaluations. After determining which data group object entry has the most specific match, MIMIX evaluates that entry to determine how to proceed with the journal entry. When the matching object entry specifies *FILE or *MBR for OMTDTA, some T-ZC journal entries for file objects are omitted from system journal replication. As a result, when OMTDTA is enabled by specifying *FILE or *MBR, the performance of the object send job may improve.

Omit content (OMTDTA) and cooperative processing

The OMTDTA and COOPDB parameters are mutually exclusive. MIMIX allows only a value of *NONE for OMTDTA when a data group object entry specifies cooperative processing of files with COOPDB(*YES) and COOPTYPE(*FILE). When using MIMIX Dynamic Apply for cooperative processing, logical files and physical files (source and data) are replicated primarily through the user journal. Legacy cooperative processing replicates only physical data files. When using legacy cooperative processing, system journal replication processes select only file attribute transactions. File attribute transactions are T-ZC journal entries with access types 7 (Change), 62 (Add constraint), 63 (Change constraint), and 64 (Remove constraint). These transactions are replicated by system journal replication during legacy cooperative processing, while most other transactions are replicated by user journal replication.

Omit content (OMTDTA) and comparison commands

All T-ZC journal entries for files are replicated when *NONE is specified for the OMTDTA parameter. However, when OMTDTA is enabled by specifying *FILE or *MBR, the file on the target system reflects the original state of the file on the source system when it was retrieved for replication. For example, if *MBR is specified for OMTDTA, the file and member attributes are replicated to the target system but the member data is not. The file is not identical between source and target systems, but it is synchronized according to configuration.

Consider how the following comparison commands behave when faced with non-identical files that are synchronized according to the configuration:
• The Compare File Attributes (CMPFILA) command has access to configuration information from data group object entries for files configured for system journal replication. When a data group is specified on the command, files that are configured to omit data will report those omitted attributes as *EC (equal configuration). When CMPFILA is run without specifying a data group, the synchronized-but-not-identical attributes are reported as *NE (not equal).
• The Compare File Data (CMPFILDTA) command uses data group file entries for configuration information. When a data group is specified on the command, any file objects configured for OMTDTA will not be compared. When CMPFILDTA is run without specifying a data group, the synchronized-but-not-identical file member attributes are reported as *NE (not equal).
• The Compare Object Attributes (CMPOBJA) command can be used to check for the existence of a file on both systems and to compare its basic attributes (those which are common to all object types). This command never compares file-specific attributes or member attributes and should not be used to determine whether a file is synchronized.

Running a comparison command without specifying a data group will report all the synchronized-but-not-identical attributes as *NE (not equal) because no configuration information is considered. MIMIX audits, which call comparison commands with a data group specified, will have the same results as commands run with a data group specified: comparison commands will report these attributes as *EC (equal configuration) even though member data is different.

Selecting an object retrieval delay

When replicating objects, particularly documents (*DOC) and stream files (*STMF), MIMIX will obtain a lock on the object that can prevent your applications from accessing the object in a timely manner. You can reduce, or eliminate, contention for an object between MIMIX and your applications if the object retrieval processing is delayed for a predetermined amount of time before obtaining a lock on the object to retrieve it for replication. You can use the Object retrieval delay element within the Object processing parameter on the change or create data group definition commands to set the delay time between the time the object was last changed on the source system and the time MIMIX attempts to retrieve the object on the source system. You can specify a delay time from 0 through 999 seconds; the default is 0. Although you can specify this value at the data group level, you can override the data group value at the object level by specifying an Object retrieval delay value on the commands for creating or changing data group entries.

Object retrieval delay considerations and examples

You should use care when choosing the object retrieval delay. Too short a delay may allow MIMIX to retrieve an object before an application is finished with it; some of your applications may be unable to recover from this condition and may fail in an unexpected manner. A long delay may impact the ability of system journal replication processes to move data from a system in a timely manner. You should make the value large enough to reduce or eliminate contention between MIMIX and applications, but small enough to allow MIMIX to maintain a suitable high availability environment.

If the object retrieval latency time (the difference between when the object was last changed and the current time) is less than the configured delay value, then MIMIX will delay its object retrieval processing until the difference between the time the object was last changed and the current time exceeds the configured delay value. If the object retrieval latency time is greater than the configured delay value, MIMIX will not delay and will continue with the object retrieval processing.

Example 1 - The object retrieval delay value is configured to be 3 seconds:
• Object A is created or changed at 9:05:10.
• The Object Retrieve job encounters the create/change journal entry at 9:05:14.
It retrieves the "last change date/time" attribute from the object and determines that the delay time (object last changed date/time of 9:05:10 + configured delay value of :03 = 9:05:13) is less than the current date/time (9:05:14). Because the object retrieval delay time has already been exceeded, the object retrieve job continues normal processing and attempts to package the object.

Example 2 - The object retrieval delay value is configured to be 2 seconds:
• Object A is created or changed at 10:45:51.
• The Object Retrieve job encounters the create/change journal entry at 10:45:52. It retrieves the "last change date/time" attribute from the object and determines that the delay time (object last changed date/time of 10:45:51 + configured delay value of :02 = 10:45:53) exceeds the current date/time (10:45:52). Because the object retrieval delay value has not been met or exceeded, the object retrieve job delays for 1 second to satisfy the configured delay value.
• After the delay (at time 10:45:53), the Object Retrieve job again retrieves the "last change date/time" attribute from the object and determines that the delay time (object last changed date/time of 10:45:51 + configured delay value of :02 = 10:45:53) is equal to the current date/time (10:45:53). Because the object retrieval delay value has been met, the object retrieve job continues with normal processing and attempts to package the object.

Example 3 - The object retrieval delay value is configured to be 4 seconds:
• Object A is created or changed at 13:20:26.
• The Object Retrieve job encounters the create/change journal entry at 13:20:27. It retrieves the "last change date/time" attribute from the object and determines that the delay time (object last changed date/time of 13:20:26 + configured delay value of :04 = 13:20:30) exceeds the current date/time (13:20:27) and delays for 3 seconds to satisfy the configured delay value.
• While the object retrieve job is waiting to satisfy the configured delay value, the object is changed again at 13:20:28.
• After the delay (at time 13:20:30), the Object Retrieve job again retrieves the "last change date/time" attribute from the object and determines that the delay time (object last changed date/time of 13:20:28 + configured delay value of :04 = 13:20:32) again exceeds the current date/time (13:20:30) and delays for 2 seconds to satisfy the configured delay value.
• After the delay (at time 13:20:32), the Object Retrieve job again retrieves the "last change date/time" attribute from the object and determines that the delay time (object last changed date/time of 13:20:28 + configured delay value of :04 = 13:20:32) is equal to the current date/time (13:20:32). Because the object retrieval delay value has now been met, the object retrieve job continues with normal processing and attempts to package the object.

Configuring to replicate SQL stored procedures and user-defined functions

DB2 UDB for IBM PowerTM Systems supports external stored procedures and SQL stored procedures. An SQL procedure is a program created and linked to the database as the result of a CREATE PROCEDURE statement that specifies the language SQL and is called using the SQL CALL statement. SQL stored procedures are defined entirely in SQL and may contain SQL control statements. MIMIX can replicate operations related to stored procedures that are written in SQL (SQL stored procedures). This information is specifically for replicating SQL stored procedures and user-defined functions.

MIMIX replicates these operations correctly: CREATE PROCEDURE (create), DROP PROCEDURE (delete), GRANT PRIVILEGES ON PROCEDURE (authority), and REVOKE PRIVILEGES ON PROCEDURE (authority). See "Requirements for replicating SQL stored procedure operations" on page 356.

For SQL stored procedures, an independent program object is created by the system and contains the code for the procedure. The program object usually shares the name of the procedure and resides in the same library with which the procedure is associated. Procedures are associated with a particular library. Because information about the procedure is stored in the database catalog and not the library, it cannot be seen by looking at the library. Use System i Navigator to view the stored procedures associated with a particular library (select Databases > Libraries). For example, the following statements create program SQLPROC in LIBX and establish it as a stored procedure associated with LIBX:

CREATE PROCEDURE LIBX/SQLPROC(OUT NUM INT) LANGUAGE SQL SELECT COUNT(*) INTO NUM FROM FILEX

To correctly replicate a create operation, an appropriately configured data group object entry must identify the object to which the stored procedure is associated. A DROP PROCEDURE statement for an SQL procedure removes the procedure from the catalog and deletes the external program object. GRANT and REVOKE only affect the associated program object.

Requirements for replicating SQL stored procedure operations

The following configuration requirements and restrictions must be met:
• Apply any IBM PTFs (or their supersedes) associated with IBM i releases as they pertain to your environment. Log in to Support Central and refer to the Technical Documents page for a list of required and recommended IBM PTFs.
• Name mapping cannot be used for either the library or program name.
• The COMMENT statement cannot be replicated.
• Stored procedures or other system table concepts that have non-deterministic ties to a library-based object cannot be replicated.

To replicate SQL stored procedure operations

Do the following:
1. Ensure that you have a data group object entry that includes the associated program object. For example:
ADDDGOBJE DGDFN(name system1 system2) LIB1(library) OBJ1(*ALL) OBJTYPE(*PGM)
2. Ensure that the replication requirements for the various operations are followed. See "Requirements for replicating SQL stored procedure operations" on page 356.
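The object retrieval delay decision illustrated in Examples 1 through 3 earlier in this chapter reduces to a simple comparison: wait until the object's last-changed time plus the configured delay is no later than the current time, re-reading the last-changed time after each wait in case the object changed again. The following is a simplified model of that decision, not MIMIX code; the function name is hypothetical.

```python
# Simplified model (not MIMIX code) of the object retrieval delay decision.
def seconds_to_wait(last_changed, configured_delay, now):
    """All arguments are in seconds; returns how long the retrieve job waits.

    The job waits until (last_changed + configured_delay) <= now. If the
    object's latency already exceeds the configured delay, there is no wait.
    """
    ready_at = last_changed + configured_delay
    return max(0, ready_at - now)

# Example 1: changed at :10, delay 3s, entry encountered at :14 -> no wait.
print(seconds_to_wait(10, 3, 14))   # 0
# Example 2: changed at :51, delay 2s, encountered at :52 -> wait 1 second.
print(seconds_to_wait(51, 2, 52))   # 1
# Example 3, second pass: re-changed at :28, delay 4s, now :30 -> wait 2 more.
print(seconds_to_wait(28, 4, 30))   # 2
```

Because the last-changed time is re-read after every wait, a busy object can postpone retrieval repeatedly, which is one reason very large delay values can hold back replication.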
Using Save-While-Active in MIMIX

MIMIX system journal replication processes use save/restore when replicating most types of objects. If there is conflict for the use of an object between MIMIX and some other process, the initial save of the object may fail. When such a failure occurs, MIMIX will attempt to process the object by automatically starting delay or retry processing using the values configured in the data group definition.

Values for retry processing are specified in the First retry delay interval (RTYDLYITV1) and Number of times to retry (RTYNBR) parameters in the data group definition. After the initial failed save attempt, MIMIX delays for the number of seconds specified in the RTYDLYITV1 value before retrying the save operation. This is repeated for the number of times that is specified for the RTYNBR value in the data group definition. If the object cannot be saved after the attempts specified in RTYNBR, MIMIX then uses the delay interval value which is specified in the Second retry delay interval (RTYDLYITV2) parameter; the save is again attempted for the number of retries specified in the RTYNBR parameter. For the initial default values for a data group, this calculates to be 7 save attempts (1 initial attempt, 3 attempts using the first delay value of 5 seconds, and 3 attempts using the second delay value of 300 seconds) in a time frame of approximately 20 minutes. For more information on retry processing, see the parameters for automatic retry processing in “Tips for data group parameters” on page 210.

Save-while-active wait time

For the default (*FILE objects), MIMIX uses save-while-active with a wait time of 120 seconds on the initial save attempt. MIMIX then uses normal (non save-while-active) processing on all subsequent save attempts if the initial save attempt fails. You can configure the save-while-active wait time when specifying to use save-while-active for the initial save attempt of a *FILE, a DLO, or an IFS object. When specifying to use save-while-active, the first attempt to save the object after delaying the amount of time configured for the Second retry delay interval (RTYDLYITV2) will also use save-while-active.

In addition to providing the ability to enable the use of save-while-active for object types other than *FILE, MIMIX provides the abilities to control the wait time when using save-while-active or to disable the use of save-while-active for all object types. Save-while-active is not used when saving other library-based object types.

Considerations for save-while-active

If a file is being saved and it shares a journal with another file that has uncommitted transactions, the initial save of the file may fail. In that case, the file may be successfully saved by using a normal (non save-while-active) save; this assumes that the file being saved does not itself have uncommitted transactions.
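The retry arithmetic above can be sketched in a few lines. This is an illustrative calculation only (not MIMIX code); the parameter names and shipped defaults (RTYNBR=3, RTYDLYITV1=5, RTYDLYITV2=300) come from the data group definition described above.

```python
def save_attempt_schedule(rtynbr=3, rtydlyitv1=5, rtydlyitv2=300):
    """Return (total_attempts, total_delay_seconds) for the documented
    retry processing: 1 initial attempt, then RTYNBR retries delayed by
    RTYDLYITV1 seconds each, then RTYNBR more retries delayed by
    RTYDLYITV2 seconds each."""
    attempts = 1 + rtynbr + rtynbr
    total_delay = rtynbr * rtydlyitv1 + rtynbr * rtydlyitv2
    return attempts, total_delay

attempts, delay = save_attempt_schedule()
print(attempts)     # 7 save attempts with the shipped defaults
print(delay)        # 915 seconds of delay; roughly 20 minutes overall
                    # once the save attempts themselves are included
```

With the defaults this reproduces the 7 attempts cited above; the delays alone total about 15 minutes, consistent with the approximate 20-minute window once save time is added.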
Types of save-while-active options

MIMIX uses the configuration value (DGSWAT: Save-while-active type) to select the type of save-while-active option to be used when saving objects. You can specify the following values:
• A value of 0 (the default) indicates that save-while-active is to be used when saving files, with a save-while-active wait time of 120 seconds. For DLOs and IFS objects, a normal save will be attempted.
• A value of 1 through 99999 indicates that save-while-active is to be used when saving files, DLOs, and IFS objects. The value specified will be used as the save-while-active wait time, such as when passed to the SAVACTWAIT parameter on the SAVOBJ and SAVDLO commands.
• A value of -1 indicates that save-while-active is disabled and is not to be used when saving files, DLOs, or IFS objects. Normal saves will always be used to save any type of object.

Note: Although MIMIX has the capability to replicate DLOs using save/restore techniques, it is recommended that DLOs be replicated using optimized techniques, which can be configured using the DLO transmission method under Object processing in the data group definition.

Example configurations

You can view and change these configuration values for a data group through an interface such as SQL or DFU. The following examples describe the SQL statements that could be used to view or set the configuration settings for a data group definition (data group name, system 1 name, system 2 name) of MYDGDFN.

Example - Viewing: Use this SQL statement to view the values for the data group definition:
SELECT DGDGN, DGSYS, DGSYS2, DGSWAT FROM MIMIX/DM0200P WHERE DGDGN=’MYDGDFN’ AND DGSYS=’SYS1’ AND DGSYS2=’SYS2’

Example - Modifying: If you want to modify a data group definition to enable use of save-while-active with a wait time of 30 seconds for files, DLOs, and IFS objects, you could use the following statement:
UPDATE MIMIX/DM0200P SET DGSWAT=30 WHERE DGDGN=’MYDGDFN’ AND DGSYS=’SYS1’ AND DGSYS2=’SYS2’

Note: You only have to make this change on the management system; the network system will be automatically updated by MIMIX.

Example - Disabling: If you want to modify the values for a data group definition to disable use of save-while-active for a data group and use a normal save, you could use the following statement:
UPDATE MIMIX/DM0200P SET DGSWAT=-1 WHERE DGDGN=’MYDGDFN’ AND DGSYS=’SYS1’ AND DGSYS2=’SYS2’
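The three DGSWAT value ranges above can be summarized as a small decision table. The helper below is hypothetical (not part of MIMIX or its database); it only restates the documented semantics of -1, 0, and 1 through 99999.

```python
def save_while_active_policy(dgswat):
    """Map a DGSWAT value to the documented save behavior:
    -1 disables save-while-active entirely, 0 (the default) uses it for
    *FILE objects only with a 120-second wait, and 1-99999 uses it for
    files, DLOs, and IFS objects with the given wait time."""
    if dgswat == -1:
        return {"enabled": False, "object_types": [], "wait_seconds": None}
    if dgswat == 0:
        return {"enabled": True, "object_types": ["*FILE"], "wait_seconds": 120}
    if 1 <= dgswat <= 99999:
        return {"enabled": True,
                "object_types": ["*FILE", "*DLO", "*IFS"],
                "wait_seconds": dgswat}
    raise ValueError("DGSWAT must be -1, 0, or 1 through 99999")

print(save_while_active_policy(0)["wait_seconds"])   # 120
print(save_while_active_policy(30)["object_types"])  # files, DLOs, IFS objects
```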
CHAPTER 17 Object selection for Compare and Synchronize commands

Many of the Compare and Synchronize commands, which provide underlying support for MIMIX AutoGuard, use an enhanced set of common parameters and a common processing methodology that is collectively referred to as ‘object selection.’ Object selection provides powerful, granular capability for selecting objects by data group, object selection parameter, or a combination. Object selection supports four classes of objects: files, objects, IFS objects, and DLOs.

The following commands use the MIMIX object selection capability:
• Compare File Attributes (CMPFILA)
• Compare Object Attributes (CMPOBJA)
• Compare IFS Attributes (CMPIFSA)
• Compare DLO Attributes (CMPDLOA)
• Compare File Data (CMPFILDTA)
• Compare Record Count (CMPRCDCNT)
• Synchronize Object (SYNCOBJ)
• Synchronize IFS Object (SYNCIFS)
• Synchronize DLO (SYNCDLO)

The topics in this chapter include:
• “Object selection process” on page 360 describes how object selection interacts with your input from a command so that the objects you expect are selected for processing.
• “Parameters for specifying object selectors” on page 363 describes the object selectors and elements which allow you to work with classes of objects.
• “Object selection examples” on page 368 provides examples and graphics with detailed information about object selection processing, object order precedence, and subtree rules.
• “Report types and output formats” on page 378 describes the output of compare commands: spooled files and output files (outfiles).

Object selection process

It is important to be able to predict the manner in which object selection interacts with your input from a command so that the objects you expect are selected for processing. The object selection process takes a candidate group of objects, subsets them as defined by a list of object selectors, and produces a list of objects to be processed.

Object selection process flow

Candidate objects are those objects eligible for selection; they are input to the object selection process. Figure 24 illustrates the process flow for object selection (Figure 24. Object selection process flow). Initially, candidate objects consist of all objects on the system.

MIMIX processing for object selection consists of two distinct steps. Depending on what is specified on the command, one or both steps may occur.

The first major selection step is optional and is performed only if a data group definition is entered on the command. In that case, data group entries are the source for object selectors. Data group entries represent one of four classes of objects: files, objects, IFS objects, and DLOs. Only those entries that correspond to the class associated with the command are used. The data group entries subset the list of candidate objects for the class to only those objects that are eligible for replication by the data group. Based on the command, the set of candidate objects may be narrowed down to objects of a particular class (such as IFS objects). That intermediate set is input to the second major selection step.

The second major object selection step subsets the candidate objects based on object selectors from the command’s object selector parameter (file, object, IFS object, or DLO). The selection parameter is separate and distinct from the data group configuration entries. If the command specifies a data group and items on the object selection parameter, the data group entries are processed first to determine an intermediate set of candidate objects that are eligible for replication by the data group; the second step then uses the input specified on the object selection parameter to further subset the objects selected by the data group entries. If no data group is specified on the data group definition parameter, the object selection parameter can be used independently to select from all objects on the system. If no object selectors are specified on the parameter, the default is to select all candidate objects. The remaining candidate objects make up the resultant list of objects to be processed.

An object selector identifies an object or group of objects. Object selectors can come from the configuration information for a specified data group, from items specified in the object selector parameter, or both. The values specified on the command determine the object selectors used to further refine the list of candidate objects in the class. If a data group is specified, the possible object selectors are 1 to N, where N is defined by the number of data group entries, so the resulting list can easily exceed the limit of 300 object selectors that can be entered on a command. Up to 300 object selectors may be specified on the parameter.

Each object selector consists of multiple object selector elements, which serve as filters on the object selector. Elements provide information about the object such as its name, an indicator of whether the objects should be included in or omitted from processing, and name mapping for dual-system and single-system environments. The object selector elements vary by object class. See Table 44 for a list of object selector elements by object class.

Note: A single object selector can select multiple objects through the use of generic names and special values such as *ALL.

Order precedence

Object selectors are always processed in a well-defined sequence, which is important when an object matches more than one selector.
Selectors from a data group follow data group rules and are processed in most- to least-specific order. Selectors from the object selection parameter are always processed last to first; if a candidate object matches more than one object selector, the last matching selector in the list is used. As a general rule when specifying items on an object selection parameter, first specify selectors that have a broad scope and then gradually narrow the scope in subsequent selectors. In an IFS-based command, for example, include /A/B* and then omit /A/B1. “Object selection examples” on page 368 illustrates the precedence of object selection.

For each object selector, the elements are checked according to a priority defined for the object class. The most specific element is checked for a match first; then the subsequent elements are checked according to their priority.

Object selection supports generic object name values for all object classes. A generic name is a character string that contains one or more characters followed by an asterisk (*). When a generic name is specified, all candidate objects that match the generic name are selected.

For additional, detailed information about order precedence and priority of elements, see the following topics:
• “How MIMIX uses object entries to evaluate journal entries for replication” on page 92
• “Identifying IFS objects for replication” on page 106
• “How MIMIX uses DLO entries to evaluate journal entries for replication” on page 111
• “Processing variations for common operations” on page 117

Parameters for specifying object selectors

The object selectors and elements allow you to work with classes of objects. On each of these commands, you can specify as many as 300 object selectors. An object selector consists of several elements that identify an object or group of objects, indicate if those objects should be included in or omitted from processing, and may describe name mapping for those objects. The elements vary, depending on the class of objects with which a particular command works; the specific object selector elements that you can specify on the command are determined by the class of object. These objects can be library-based, directory-based, or folder-based:
• Library-based selection allows you to work with files or objects based on object name, library name, member name, object type, or object attribute.
• Directory-based selection allows you to work with objects based on an IFS object path name and includes a subtree option that determines the scope of directory-based objects to include.
• Folder-based selection allows you to work with objects based on DLO path name, and also includes a subtree object selector.

Object selector elements provide the following functions:
• Object identification elements define the selected object by name. These elements allow you to choose a specific name, a generic name, or the special value *ALL.
• Filtering elements provide additional filtering capability for candidate objects.
• Include or omit elements identify whether the object should be processed or explicitly excluded from processing.
• Name mapping elements are required primarily for environments where objects exist in different libraries or paths.
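The generic-name rule above — one or more characters followed by an asterisk, with *ALL as a match-everything special value — can be sketched as a small matcher. This is illustrative Python, not MIMIX code.

```python
def matches_selector(name, selector):
    """Return True when a candidate name is selected: *ALL matches
    everything, a generic name such as A* matches names beginning with
    the characters before the asterisk, and a specific name matches
    only itself."""
    if selector == "*ALL":
        return True
    if selector.endswith("*"):                 # generic name
        return name.startswith(selector[:-1])
    return name == selector                    # specific name

candidates = ["ABC", "AB", "A", "DEF"]
print([n for n in candidates if matches_selector(n, "A*")])   # ['ABC', 'AB', 'A']
print([n for n in candidates if matches_selector(n, "*ALL")]) # all four names
```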
Table 44 lists object selection elements by function and identifies which elements are available on the commands.

Table 44. Object selection parameters and parameter elements by class

Class:                   File                     Library-based object     IFS                        DLO
Commands:                CMPFILA, CMPFILDTA,      CMPOBJA, SYNCOBJ         CMPIFSA, SYNCIFS           CMPDLOA, SYNCDLO
                         CMPRCDCNT (1)
Parameter:               FILE                     OBJ                      OBJ                        DLO
Identification elements: File, Library, Member    Object, Library          Path, Subtree              Path, Subtree
Filtering elements:      Attribute (1)            Type, Attribute          Name pattern, Type         Name pattern, Type, Owner
Processing elements:     Include/Omit             Include/Omit             Include/Omit               Include/Omit
Name mapping elements:   System 2 file (1),       System 2 object,         System 2 path,             System 2 path,
                         System 2 library (1)     System 2 library         System 2 name pattern      System 2 name pattern

1. The Compare Record Count (CMPRCDCNT) command does not support elements for attributes or name mapping.

File name and object name elements: The File name and Object name elements allow you to identify a file or object by name. These elements can be a specific name, a generic name, or the special value *ALL. Using a generic name, you can select a group of files or objects based on a common character string. If you want to work with all objects beginning with the letter A, for example, you would specify A* for the object name. To process all files within the related selection criteria, select *ALL for the file or object name. When a data group is also specified on the command, a value of *ALL results in the selection of files and objects defined to that data group by the respective data group file entries or data group object entries. When no data group is specified on the command, specifying *ALL and a library name selects only the objects that reside within the given library.

Library name element: The Library name element specifies the name of the library that contains the files or objects to be included or omitted from the resultant list of objects. Like the file or object name, this element allows you to define the library using a specific name, a generic name, or the special value *ALL.

Note: The library value *ALL is supported only when a data group is specified.

Member element: For commands that support the ability to work with file members, the Member element provides a means to select specific members. The Member element can be a specific name, a generic name, or the special value *ALL. Refer to the individual commands for detailed information on member processing.

Object path name (IFS) and DLO path name elements: The Object path name (IFS) and DLO path name elements identify an object or DLO by path name. These elements allow a specific path, a generic path, or the special value *ALL. Traditionally, DLOs are identified by a folder path and a DLO name; object selection uses an element called DLO path, which combines the folder path and the DLO name. If you specify a data group, only those objects defined to that data group by the respective data group IFS entries or data group DLO entries are selected.

Directory subtree and folder subtree elements: The Directory subtree and Folder subtree elements allow you to expand the scope of selected objects and include the descendants of objects identified by the given object or DLO path name. By default, the subtree element is *NONE, and only the named objects are selected. However, if *ALL is used, all descendants of the named objects are also selected. For more information, see the graphics and examples beginning with “Example subtree” on page 371. Figure 25 illustrates the hierarchical structure of folders and directories prior to processing and is used as the basis for the path and subtree examples shown later in this document. (Figure 25. Directory or folder hierarchy)
Directory subtree elements for IFS objects: When selecting IFS objects, object selection will not cross file system boundaries when processing subtrees; only the objects in the file system specified will be included. Objects from other file systems do not need to be explicitly excluded; however, you will need to specify if you want to include objects from other file systems. For more information, see the graphic and examples beginning with “Example subtree for IFS objects” on page 376.

Name pattern element: The Name pattern element provides a filter on the last component of the object path name. The Name pattern element can be a specific name, a generic name, or the special value *ALL. If you specify a pattern of $*, only those candidate objects with names beginning with $ that reside in the named DLO path or IFS object path are selected. Thus, the Name pattern element is generally most useful when subtree is *ALL. Keep in mind that improper use of the Name pattern element can have undesirable results. For example, let us assume you specified a path name of /corporate, a subtree of *NONE, and a pattern of $*. Since the path name, /corporate, does not match the pattern of $*, the object selector will identify no objects. For more information, see the “Example Name pattern” on page 375.

Object type element: The Object type element provides the ability to filter objects based on an object type. The object type is valid for library-based objects, IFS objects, or DLOs, and can be a specific value or *ALL. The list of allowable values varies by object class. When you specify *ALL, only those object types which MIMIX supports for replication are included. For a list of replicated object types, see “Supported object types for system journal replication” on page 505.

Supported object types for CMPIFSA and SYNCIFS are listed in Table 45.

Table 45. Supported object types for CMPIFSA and SYNCIFS
Object type  Description
*ALL         All directories, stream files, and symbolic links are selected
*DIR         Directories
*STMF        Stream files
*SYMLNK      Symbolic links

Supported DLO types for CMPDLOA and SYNCDLO are listed in Table 46.

Table 46. Supported DLO types for CMPDLOA and SYNCDLO
DLO type  Description
*ALL      All documents and folders are selected
*DOC      Documents
*FLR      Folders
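The Name pattern rule above — a generic filter applied only to the last component of a path — can be sketched as follows. This is illustrative Python, not MIMIX code; the sample tree is hypothetical.

```python
import posixpath

def pattern_filter(paths, pattern):
    """Keep only paths whose final name component matches the pattern:
    *ALL matches everything, a generic value such as $* matches names
    beginning with $, and a specific value matches only itself."""
    def match(name):
        if pattern == "*ALL":
            return True
        if pattern.endswith("*"):
            return name.startswith(pattern[:-1])
        return name == pattern
    return [p for p in paths if match(posixpath.basename(p))]

tree = ["/corporate/accounting",
        "/corporate/accounting/$123",
        "/corporate/accounting/$236",
        "/corporate/accounting/payroll",
        "/corporate/accounting/payroll/$895"]
print(pattern_filter(tree, "$*"))
# only the three $-prefixed objects; /corporate/accounting itself
# does not match the pattern and is filtered out
```

Note how /corporate/accounting is excluded by the $* pattern, which is exactly why a pattern combined with subtree *NONE can identify no objects at all.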
For unique object types supported by a specific command, see the individual commands.

Object attribute element: The Object attribute element provides the ability to filter based on extended object attribute. For example, file attributes include PF, LF, SAVF, and DSPF, and program attributes include CLP and RPG. The attribute can be a specific value, a generic value, or *ALL. Although any value can be entered on the Object attribute element, a list of supported attributes is available on the command. Refer to the individual commands for the list of supported attributes.

Owner element: The Owner element allows you to filter DLOs based on DLO owner. Only candidate DLOs owned by the designated user profile are selected. The Owner element can be a specific name or the special value *ALL.

System 2 file and system 2 object elements: The System 2 file and System 2 object elements provide support for name mapping between files or objects in different libraries. Name mapping is useful when working with multiple sets of files or objects in a dual-system or single-system environment. This element may be a specific name or the special value *FILE1 for files or *OBJ1 for objects. These values indicate that the name of the file or object on system 2 is the same as that value on system 1 and that no name mapping occurs. If the File or Object element is not a specific name, then you must use the default value of *FILE1 or *OBJ1. Generic values are not supported for the system 2 value if a generic value was specified on the File or Object parameter.

System 2 object path name and system 2 DLO path name elements: The System 2 object path name and System 2 DLO path name elements support name mapping for the path specified in the Object path name or DLO path name element. Name mapping is useful when working with two sets of IFS objects or DLOs in different paths in either a dual-system or single-system environment. If the Object path name or DLO path name element is not a specific name, you must choose the default values of *OBJ1 for IFS objects or *DLO1 for DLOs. These values indicate that the name of the object or DLO on system 2 is the same as on system 1 and that no name mapping occurs. Generic values are not supported for the system 2 value if you specified a generic value for the IFS object or DLO element.

System 2 library element: The System 2 library element allows you to specify a system 2 library name that differs from the system 1 library name, providing name mapping between files or objects in different libraries. This element may be a specific name or the special value *LIB1, which indicates that the name of the library on system 2 is the same as on system 1 and that no name mapping occurs. The default provides support for a two-system environment without name mapping. If the System 2 library element is not a specific name, then you must use the default value of *LIB1. Generic values are not supported for the system 2 value if a generic value was specified on the Library object selector.

Include or omit element: The Include or omit element determines if candidate objects are included in or omitted from the resultant list of objects to be processed by the command. Included entries are added to the resultant list and become candidate objects for further processing. Omitted entries are not added to the list and are excluded from further processing.

System 2 name pattern element: The System 2 name pattern element provides support for name mapping for the descendants of the path specified for the Object path name or DLO path name element.
The System 2 name pattern element may be a specific name or the special value *PATTERN1, which indicates that no name mapping occurs. If the Object path name or DLO path name element is not a specific name, then you must use the default value of *PATTERN1. Generic values are not supported for the System 2 name pattern element if you specified a generic value for the Name pattern element.

Object selection examples

In this section, examples and graphics provide you with detailed information about object selection processing. These illustrations show how objects are selected based on specific selection criteria, object order precedence, and subtree rules.

Processing example with a data group and an object selection parameter

Using the CMPOBJA command, let us assume you want to compare the objects defined to data group DG1. For simplicity, all candidate objects in this example are defined to library LIBX. Table 47 lists all candidate objects on your system.

Table 47. Candidate objects on system
Object  Library  Object type
ABC     LIBX     *FILE
AB      LIBX     *SBSD
A       LIBX     *OUTQ
DEF     LIBX     *PGM
DE      LIBX     *DTAARA
D       LIBX     *CMD

Table 48 represents the object selectors based on the data group object entry configuration for data group DG1. Objects are evaluated against data group entries in the same order of precedence used by replication processes.

Table 48. Object selectors from data group entries for data group DG1
Object  Library  Object type  Include or omit  Order processed
A*      LIBX     *ALL         *INCLUDE         3
ABC*    LIBX     *FILE        *OMIT            2
DEF     LIBX     *JOBQ        *INCLUDE         1

The object selectors from the data group subset the candidate object list, resulting in the list of objects defined to the data group shown in Table 49. This list is internal to MIMIX and not visible to users.

Table 49. Objects selected by data group DG1
Object  Library  Object type
A       LIBX     *OUTQ
AB      LIBX     *SBSD
DEF     LIBX     *JOBQ

Note: Although job queue DEF in library LIBX did not appear in Table 47, it would be added to the list of candidate objects when you specify a data group for some commands that support object selection. These commands are required to identify or report candidate objects that do not exist.

Next, perhaps you want to include or omit specific objects from the filtered candidate objects listed in Table 49. Table 50 shows the object selectors to be processed based on the values specified on the object selection parameter. These object selectors serve as an additional filter on the candidate objects.

Table 50. Object selectors for CMPOBJA object selection parameter
Object  Library  Object type  Include or omit  Order processed
*ALL    LIBX     *OUTQ        *INCLUDE         1
*ALL    LIBX     *SBSD        *INCLUDE         2
*ALL    LIBX     *JOBQ        *OMIT            3

The objects compared by the CMPOBJA command are shown in Table 51. These are the result of the candidate objects selected by the data group (Table 49) that were subsequently filtered by the object selectors specified for the Object parameter on the CMPOBJA command (Table 50).

Table 51. Resultant list of objects to be processed
Object  Library  Object type
A       LIBX     *OUTQ
AB      LIBX     *SBSD

Processing example with only an object selection parameter

In this example, the CMPOBJA command is used to compare a set of objects. No data group is specified; the input source is the object selection parameter. The data in the following tables show how candidate objects would be processed in order to achieve a resultant list of objects. The sequence column identifies the order in which object selectors were entered. The last object selector entered on the command is the first one used when determining whether or not an object matches a selector. Thus, generic object selectors with the broadest scope, such as A*, should be specified ahead of more specific generic entries, such as ABC*. Specific entries should be specified last.
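The precedence rules above — selectors checked from the last one entered to the first, with the first match deciding whether a candidate is included or omitted — can be sketched as follows. This is illustrative Python, not MIMIX code; the sample data mirrors the no-data-group example in this section.

```python
def select(candidates, selectors):
    """candidates: (name, type) pairs; selectors: (name, type, include)
    triples in the order they were entered on the command. Selectors are
    checked last to first, and the first matching selector decides
    whether the candidate is included or omitted."""
    def match(value, sel):
        if sel == "*ALL":
            return True
        if sel.endswith("*"):                  # generic value
            return value.startswith(sel[:-1])
        return value == sel
    result = []
    for name, otype in candidates:
        for sname, stype, include in reversed(selectors):
            if match(name, sname) and match(otype, stype):
                if include:
                    result.append((name, otype))
                break                          # first match decides
    return result

candidates = [("ABC", "*FILE"), ("AB", "*SBSD"), ("A", "*OUTQ"),
              ("DEFG", "*PGM"), ("DEF", "*PGM"), ("DE", "*DTAARA"), ("D", "*CMD")]
selectors = [("A*", "*ALL", True), ("D*", "*ALL", True), ("ABC*", "*ALL", False),
             ("*ALL", "*PGM", False), ("DEFG", "*PGM", True)]
print(select(candidates, selectors))
# A, AB, D, DE, and DEFG survive; ABC is omitted by ABC* and DEF by *ALL/*PGM
```

Note how DEFG is included even though *ALL/*PGM omits programs: the DEFG selector was entered last, so it is checked first and wins.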
Table 52 lists all the candidate objects on your system.

Table 52. Candidate objects on system
Object  Library  Object type
ABC     LIBX     *FILE
AB      LIBX     *SBSD
A       LIBX     *OUTQ
DEFG    LIBX     *PGM
DEF     LIBX     *PGM
DE      LIBX     *DTAARA
D       LIBX     *CMD

Table 53 represents the object selectors entered on the object selection parameter.

Table 53. Object selectors entered on CMPOBJA selection parameter
Sequence entered  Object  Library  Object type  Include or omit
1                 A*      LIBX     *ALL         *INCLUDE
2                 D*      LIBX     *ALL         *INCLUDE
3                 ABC*    LIBX     *ALL         *OMIT
4                 *ALL    LIBX     *PGM         *OMIT
5                 DEFG    LIBX     *PGM         *INCLUDE

Table 54 illustrates how the candidate objects are selected. The object selectors serve as filters on the candidate objects listed in Table 52 and are processed last to first.

Table 54. Candidate objects selected by object selectors
Object  Library  Object type  Include or omit  Selected candidate objects  Sequence processed
DEFG    LIBX     *PGM         *INCLUDE         DEFG                        5
*ALL    LIBX     *PGM         *OMIT            DEF                         4
ABC*    LIBX     *ALL         *OMIT            ABC                         3
D*      LIBX     *ALL         *INCLUDE         D, DE                       2
A*      LIBX     *ALL         *INCLUDE         A, AB                       1

Table 55 represents the included objects from Table 54. This filtered set of candidate objects is the resultant list of objects to be processed by the CMPOBJA command.

Table 55. Resultant list of objects to be processed
Object  Library  Object type
A       LIBX     *OUTQ
AB      LIBX     *SBSD
D       LIBX     *CMD
DE      LIBX     *DTAARA
DEFG    LIBX     *PGM

Example subtree

In the following graphics, the shaded area shows the objects identified by the combination of the Object path name and Subtree elements of the Object parameter for an IFS command. Circled objects represent the final list of objects selected for processing.

Figure 26 illustrates a path name value of /corporate/accounting, a subtree specification of *ALL, a pattern value of *ALL, and an object type of *ALL. The candidate objects selected include /corporate/accounting and all descendants. (Figure 26. Directory of /corporate/accounting/)

Figure 27 shows a path name of /corporate/accounting/*, a subtree specification of *NONE, a pattern value of *ALL, and an object type of *ALL. In this case, no additional filtering is performed on the objects identified by the path and subtree. The candidate objects selected consist of the specified objects only. (Figure 27. Subtree *NONE for /corporate/accounting/*)

Figure 28 displays a path name of /corporate/accounting/*, a subtree specification of *ALL, a pattern value of *ALL, and an object type of *ALL. All descendants of /corporate/accounting/* are selected. (Figure 28. Subtree *ALL for /corporate/accounting/*)

Figure 29 is a subset of Figure 28. Figure 29 shows a path name of /corporate/accounting, a subtree specification of *NONE, a pattern value of *ALL, and an object type of *ALL, where only the specified directory is selected. (Figure 29. Subtree *NONE for /corporate/accounting)

Example Name pattern

The Name pattern element acts as a filter on the last component of the object path name. Figure 30 specifies a path name of /corporate/accounting, a subtree specification of *ALL, a pattern value of $*, and an object type of *ALL.
Report types and output formats

The following compare commands support output in spooled files and in output files (outfiles): the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, CMPDLOA), the Compare File Data (CMPFILDTA) command, the Compare Record Count (CMPRCDCNT) command, and the Check DG File Entries (CHKDGFE) command.

The level of information in the output is determined by the value specified on the Report type parameter. These values vary by command. For the CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA commands, the levels of output available are *DIF, *SUMMARY, and *ALL. The CMPRCDCNT command supports the *DIF and *ALL report types. The CMPFILDTA command supports the *DIF and *ALL report types, as well as *RRN.

A report type of *DIF includes information on objects with detected differences. A report type of *SUMMARY provides a summary of all objects compared, as well as an object-level indication of whether differences were detected; it does not, however, include details about specific attribute differences. Specifying *ALL for the report type provides the information found on both the *DIF and *SUMMARY reports—that is, information on all objects and attributes that were compared.

The *RRN value allows you to output, using the MXCMPFILR outfile format, the relative record number of the first 1,000 objects that failed to compare. Using this value can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. In this case, the *RRN value provides information that enables you to display the specific records on the two systems and to determine the system on which the file should be repaired.

Spooled files

The spooled output is generated when a value of *PRINT is specified on the Output parameter. It is a human-readable print format that is intended to be delivered as a report. The spooled output consists of four main sections—the input or header section, the object selection list section, the differences section, and the summary section.

First, the header section includes all of the input values specified on the command, including the data group value (DGDFN), report type (RPTTYPE), comparison level (CMPLVL), and attributes to compare (CMPATR). It also provides a legend with a description of the special values used throughout the report.

The second section of the report is the object selection list. This section lists all of the object selection entries specified on the comparison command. Similar to the header section, it provides details on the input values specified on the command.

The detail section is the third section of the report and provides details on the objects and attributes compared. The level of detail in this section is determined by the report type specified on the command. A report type value of *ALL lists all objects compared; a value of *DIF lists details only for those objects with detected attribute differences; a value of *SUMMARY does not include the detail section for any object. Each object listed begins with a summary status that indicates whether or not differences were detected.

The fourth section of the report is the summary. It summarizes the comparison, including the number of files, objects, IFS objects, or DLOs compared, the actual attributes compared, and the number of detected differences.

Outfiles

The output file is generated when a value of *OUTFILE is specified on the Output parameter. The output file, on the other hand, is primarily intended for automated purposes, such as automatic synchronization, and is a format that is easily processed using SQL queries. Each command is shipped with an outfile template that uses a normalized database to deliver a self-defined record, or row, for every attribute you compare. Key information—including the attribute type, command name, data group name, timestamp, and system 1 and system 2 values—helps define each row.

A summary row, which provides a one-row summary for each object compared, precedes the attribute rows. The summary row indicates the overall status of the object compared. Following the summary row, each attribute compared is listed, along with the status of the attribute and the attribute value. Each row includes an indicator of whether or not attribute differences were detected. In the event the attribute compared is an indicator, a special value of *INDONLY is displayed in the value columns.

Similar to the spooled output, the level of output in the output file depends on the report type value specified on the Report type parameter.
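For example, an audit-style run that captures every compared object and attribute in both a printed report and an outfile might be requested as follows. This is a sketch: the DGDFN and RPTTYPE keywords are named in this topic, while the OUTPUT and OUTFILE keywords and the MYDG, MYLIB, and AUDITOUT names are assumptions shown for illustration; verify the keywords by prompting the command (F4).

```
/* Hypothetical example: compare file attributes for data group   */
/* MYDG, report all objects and attributes (*ALL), and write the  */
/* results both to a spooled report and to an outfile that can    */
/* later be processed with SQL queries.                           */
CMPFILA DGDFN(MYDG)              +
        RPTTYPE(*ALL)            +
        OUTPUT(*BOTH)            +
        OUTFILE(MYLIB/AUDITOUT)
```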
The normalized database feature ensures that new object attributes can be added to the audit capabilities without disruption to current automation processing. The template files for the various commands are located in the MIMIX product library.

CHAPTER 18  Comparing attributes

This chapter describes the commands that compare attributes: Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA). Collectively, these commands are referred to as the compare attributes commands.

These commands are designed to audit the attributes, or characteristics, of the objects within your environment and report on the status of replicated objects. Each command generates a candidate list of objects on both systems and can detect objects missing from either system. The results from the comparisons performed are placed in a report.

The topics in this chapter include:
• “About the Compare Attributes commands” on page 380 describes the unique features of the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA).
• “Comparing file and member attributes” on page 384 includes the procedure to compare the attributes of files and members.
• “Comparing object attributes” on page 387 includes the procedure to compare object attributes.
• “Comparing IFS object attributes” on page 390 includes the procedure to compare IFS object attributes.
• “Comparing DLO attributes” on page 393 includes the procedure to compare DLO attributes.

About the Compare Attributes commands

With the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, and CMPDLOA), you have significant flexibility in selecting the objects for comparison, the attributes to be compared, and the format in which the resulting report is created. For each object compared, the command checks for the existence of the object on the source and target systems and then compares the attributes specified on the command.

Although the CMPFILA command does not specifically compare the data within the database file, it does check attributes such as record counts and deleted records, and others that check the size of data within a file. Comparing these attributes provides you with assurance that files are most likely synchronized.
When used in combination with the automatic recovery features in MIMIX AutoGuard, the compare attributes commands provide robust functionality to help you determine whether your system is in a state to ensure a successful rollover for planned events or failover for unplanned events. You may already be using the compare attributes commands when they are called by audit functions within MIMIX AutoGuard.

Each command offers several unique features as well:
• CMPFILA provides significant capability to audit file-based attributes such as triggers, database relationships, constraints, authority, and ownership.
• The CMPOBJA command supports many attributes important to other library-based objects, such as auto-start job entries for subsystems.
• The CMPIFSA and CMPDLOA commands provide enhanced audit capability for IFS objects and DLOs.
Choices for selecting objects to compare

You can select objects to compare by using a data group, the object selection parameters, or both:
• By data group only: If you specify only a data group, all of the objects of the same class as the command that are within the name space configured for the data group are compared. For example, specifying a data group on the CMPIFSA command compares all IFS objects in the name space created by the data group IFS entries associated with the data group.
• By object selection parameters only: You can compare objects that are not replicated by a data group by specifying *NONE for the data group and specifying objects on the object selection parameters. In effect, you define a name space—the library for CMPFILA or CMPOBJA, or the directory path for CMPIFSA or CMPDLOA.
• By data group and object selection parameters: When you specify a data group name as well as values on the object selection parameters, the values specified in the object selection parameters act as a filter for the items defined to the data group, providing a level of filtering.

The compare attributes commands do not require active data groups to run. Detailed information about object selection is available in “Object selection for Compare and Synchronize commands” on page 360.
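The three selection modes above can be sketched as CL invocations. The DGDFN keyword is named in this chapter; the FILE and SYSTEM2 keywords, the element order within the selector, and the MYDG, MYLIB, ORD*, and BACKUP names are assumptions for illustration, so confirm them by prompting the command (F4).

```
/* Hypothetical examples of the three selection modes.            */

/* 1) By data group only: compare everything in the data group's  */
/*    configured name space.                                      */
CMPFILA DGDFN(MYDG)

/* 2) By object selection parameters only: no data group, so a    */
/*    remote system must be identified (System 2 parameter).      */
CMPFILA DGDFN(*NONE) FILE((MYLIB/ORD* *ALL)) SYSTEM2(BACKUP)

/* 3) Both: the selector filters the items defined to the         */
/*    data group.                                                 */
CMPFILA DGDFN(MYDG) FILE((MYLIB/ORD* *ALL))
```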
Unique parameters

The following parameters for object selection are unique to the compare attributes commands and allow you to specify an additional level of detail when comparing objects or files.

• Unique File and Object elements: The following are unique elements on the File parameter (CMPFILA command) and Objects parameter (CMPOBJA command):
  – Member: On the CMPFILA command, the value specified on the Member element is only used when *MBR is also specified on the Comparison level parameter.
  – Object attribute: The Object attribute element enables you to select particular characteristics of an object or file, including extended attributes. Extended attributes are attributes unique to given objects. For details, see “CMPFILA supported object attributes for *FILE objects” on page 383 and “CMPOBJA supported object attributes for *FILE objects” on page 383.
• System 2: The System 2 parameter identifies the remote system name and represents the system to which objects on the local system are compared. A value is required if no data group is specified. This parameter is ignored when a data group is specified, since the system 2 information is derived from the data group.
• System 1 ASP group and System 2 ASP group (CMPFILA and CMPOBJA only): The System 1 ASP group and System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP) group where objects configured for replication may reside. The ASP group name is the name of the primary ASP device within the ASP group. This parameter is ignored when a data group is specified.
• Comparison level (CMPFILA only): The Comparison level parameter indicates whether attributes are compared at the file level or at the member level.

Choices for selecting attributes to compare

The Attributes to compare parameter allows you to select which combination of attributes to compare. Each compare attributes command supports an extensive list of attributes. Each command provides the ability to select pre-determined sets of attributes (basic or extended), all supported attributes, or any other unique combination of attributes that you require. The basic set of attributes is intended to compare attributes that provide an indication that the objects compared are the same, while avoiding attributes that may be different but do not provide a valid indication that objects are not synchronized. The extended set of attributes includes the basic set of attributes and some additional attributes.

The following topics list the supported attributes for each command:
• “Attributes compared and expected results - #FILATR, #FILATRMBR audits” on page 556
• “Attributes compared and expected results - #OBJATR audit” on page 561
• “Attributes compared and expected results - #IFSATR audit” on page 569
• “Attributes compared and expected results - #DLOATR audit” on page 571

All comparison attributes supported by a specific compare attributes command may not be applicable to all object types supported by the command. CMPOBJA, for example, supports a large number of object types and related comparison attributes, and there are many cases where a specific comparison attribute is only valid for a particular object type. Comparison attributes not supported by a given object type are ignored. For example, auto-start job entries is a valid comparison attribute for object types of subsystem description (*SBSD); for all other object types selected as a result of running the compare request, the auto-start job entry attribute is ignored.

Some objects cannot be replicated using IBM's save and restore technology. For such objects, some attribute values, such as the create timestamp (CRTTSP) attribute, that are established on the source system are not maintained on the target system during the replication process. The comparison commands take this factor into consideration and check the creation date only for those objects whose values are retained during replication.
If a data group is specified on a compare request, configuration data is used when comparing objects that are identified for replication through the system journal. If an object's configured object auditing value (OBJAUD) is *NONE, its attribute changes are not replicated. When differences are detected on attributes of such an object, they are reported as *EC (equal configuration) instead of being reported as *NE (not equal). For *FILE objects configured for replication through the system journal and configured to omit T-ZC journal entries, also see “Omit content (OMTDTA) and comparison commands” on page 352.

CMPFILA supported object attributes for *FILE objects

When you specify a data group to compare, the CMPFILA command obtains information from the configured data group entries for all PF and LF files and their subtypes. Those files that are within the name space created by data group entries are compared.

The default value on the Object attribute element is *ALL, which represents the entire list of supported attributes. Any value is supported, but a list of recommended attributes is available in the online help. Table 57 lists the extended attributes for objects of type *FILE that are supported as values on the Object attribute element.

Table 57. CMPFILA supported extended attributes for *FILE objects

Object attribute   Description
*ALL               All physical and logical file types are selected for processing
LF                 Logical file
LF38               Files of type LF38
PF                 Physical file types, including PF-DTA and PF-SRC
PF-DTA             Files of type PF-DTA
PF-SRC             Files of type PF-SRC
PF38               Files of type PF38, including PF38-DTA and PF38-SRC
PF38-DTA           Files of type PF38-DTA
PF38-SRC           Files of type PF38-SRC

CMPOBJA supported object attributes for *FILE objects

When you specify a data group to compare, the CMPOBJA command obtains data group information from the data group object entries. Those objects defined to the data group object entries are compared.
Comparing file and member attributes

You can compare file attributes to ensure that files and members needed for replication exist on both systems, or any time you need to verify that files are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile.

Note: If you have automation programs monitoring escape messages for differences in file attributes, be aware that differences due to active replication (Step 16) are signaled via a new difference indicator (*UA) and escape message. See the auditing and reporting topics in this book.

To compare the attributes of files and members, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 1 (Compare file attributes) and press Enter. The Compare File Attributes (CMPFILA) command appears.
3. At the Data group definition prompts, do one of the following:
   • To compare attributes for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.
   • To compare files by name only, specify *NONE and continue with the next step.
   • To compare a subset of files defined to a data group, specify the data group name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or act as filters to the files defined to the data group indicated in Step 3. You can specify as many as 300 object selectors by using the + for more prompt. For more information, see “Object selection for Compare and Synchronize commands” on page 360. For each selector, do the following:
   a. At the File and library prompts, specify the name or the generic value you want.
   b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
   c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes, or press F4 to see a valid list of attributes.
   d. At the Include or omit prompt, specify the value you want.
   e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to those on system 1, accept the defaults. Otherwise, specify the name of the file and library on system 2 to which files on the local system are compared.
      Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
   f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.
6. At the Comparison level prompt, accept the default to compare files at a file level only, or specify *MBR to compare files at a member level.
   Note: If *FILE is specified, the Member prompt is ignored (see Step 4b).
7. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes based on whether the comparison is at a file or member level, or press F4 to see a valid list of attributes.
8. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 7, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.
   Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.
   Note: This parameter is ignored when a data group definition is specified.
11. At the Report type prompt, specify the level of detail for the output report.
12. At the Output prompt, do one of the following:
   • To generate print output, accept *PRINT and press Enter. Skip to Step 15.
   • To generate both print output and an outfile, specify *BOTH and press Enter.
   • To generate an outfile only, specify *OUTFILE and press Enter.
13. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
14. At the Output member options prompts, do the following:
   a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
   b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
15. The User data prompt appears if you selected *PRINT or *BOTH in Step 12. Accept the default to use the command name to identify the spooled output, or specify a unique name.
16. At the Maximum replication lag prompt, specify the maximum amount of time between when a file in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress.
   Note: This parameter is only valid when a data group is specified in Step 3.
17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT, since the results are already placed in an outfile.
18. At the Submit to batch prompt, do one of the following:
   • If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
   • To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the job, or specify a simple name.
21. To start the comparison, press Enter.
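The interactive steps above map onto a single CL command string. The following sketch shows one possible request; the DGDFN, CMPLVL, CMPATR, and RPTTYPE keywords are named in this chapter, while the OUTPUT and OUTFILE keywords and the MYDG, MYLIB, and FILAOUT names are assumptions for illustration, so verify them with the command prompt (F4).

```
/* Hypothetical equivalent of the procedure: member-level          */
/* comparison of the basic attribute set for data group MYDG,      */
/* reporting only detected differences to an outfile.              */
CMPFILA DGDFN(MYDG)              +
        CMPLVL(*MBR)             +
        CMPATR(*BASIC)           +
        RPTTYPE(*DIF)            +
        OUTPUT(*OUTFILE)         +
        OUTFILE(MYLIB/FILAOUT)
```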
Comparing object attributes

You can compare object attributes to ensure that objects needed for replication exist on both systems, or any time you need to verify that objects are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile.

Note: If you have automation programs monitoring escape messages for differences in object attributes, be aware that differences due to active replication (Step 15) are signaled via a new difference indicator (*UA) and escape message. See the auditing and reporting topics in this book.

To compare the attributes of objects, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 2 (Compare object attributes) and press Enter. The Compare Object Attributes (CMPOBJA) command appears.
3. At the Data group definition prompts, do one of the following:
   • To compare attributes for all objects defined by the data group object entries for a particular data group definition, specify the data group name and skip to Step 6.
   • To compare objects by object name only, specify *NONE and continue with the next step.
   • To compare a subset of objects defined to a data group, specify the data group name and continue with the next step.
4. At the Objects prompts, you can specify elements for one or more object selectors that either identify objects to compare or act as filters to the objects defined to the data group indicated in Step 3. You can specify as many as 300 object selectors by using the + for more prompt. For more information, see “Object selection for Compare and Synchronize commands” on page 360. For each selector, do the following:
   a. At the Object and library prompts, specify the name or the generic value you want.
   b. At the Object type prompt, accept *ALL or specify a specific object type to compare.
   c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes, or press F4 to see a valid list of attributes.
   d. At the Include or omit prompt, specify the value you want.
   e. At the System 2 object and System 2 library prompts, if the object and library names on system 2 are equal to those on system 1, accept the defaults. Otherwise, specify the name of the object and library on system 2 to which objects on the local system are compared.
      Note: The System 2 object and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
   f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing objects not defined to a data group. If necessary, specify the name of the remote system to which objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined set of attributes, or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see a valid list of attributes.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.
   Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.
   Note: This parameter is ignored when a data group definition is specified.
10. At the Report type prompt, specify the level of detail for the output report.
11. At the Output prompt, do one of the following:
   • To generate print output, accept *PRINT and press Enter. Skip to Step 14.
   • To generate both print output and an outfile, specify *BOTH and press Enter.
   • To generate an outfile only, specify *OUTFILE and press Enter.
12. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
13. At the Output member options prompts, do the following:
   a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
   b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
14. The User data prompt appears if you selected *PRINT or *BOTH in Step 11. Accept the default to use the command name to identify the spooled output, or specify a unique name.
15. At the Maximum replication lag prompt, specify the maximum amount of time between when an object in the data group changes and when replication of the change is expected to be complete, or accept *DFT to use the default maximum time of 300 seconds (5 minutes). You can also specify *NONE, which indicates that comparisons should occur without consideration for replication in progress.
   Note: This parameter is only valid when a data group is specified in Step 3.
16. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT, since the results are already placed in an outfile.
17. At the Submit to batch prompt, do one of the following:
   • If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
   • To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the job, or specify a simple name.
20. To start the comparison, press Enter.
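As with CMPFILA, the CMPOBJA procedure can be condensed into one command. In this sketch the OBJ and SYSTEM2 keywords, the selector element order, and the MYLIB and BACKUP names are assumptions for illustration; DGDFN, CMPATR, and RPTTYPE are named in this chapter. Prompt the command (F4) to confirm the actual keywords.

```
/* Hypothetical example: compare the basic attribute set for all  */
/* subsystem descriptions (*SBSD) in library MYLIB against the    */
/* backup system, outside of any data group.                      */
CMPOBJA DGDFN(*NONE)                        +
        OBJ((MYLIB/*ALL *SBSD *INCLUDE))    +
        SYSTEM2(BACKUP)                     +
        CMPATR(*BASIC)                      +
        RPTTYPE(*DIF)                       +
        OUTPUT(*PRINT)
```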
specify *NONE and continue with the next step. 390 . e. d. 3. To compare the attributes of IFS objects. At the IFS objects prompts. You can optionally specify that results of the comparison are placed in an outfile. do the following: a. 2. At the Name pattern prompt. and Synchronize menu. At the Directory subtree prompt. select option 3 (Compare IFS attributes) and press Enter. see “Object selection for Compare and Synchronize commands” on page 360. be aware that differences due to active replication (Step 13) are signaled via a new difference indicator (*UA) and escape message. specify the data group name and skip to Step 6. At the Object path name prompt. From the MIMIX Intermediate Main Menu. To compare IFS objects by object path name only. 14. If necessary. specify whether you want detail messages placed in the job log. 9. 11.Comparing IFS object attributes f. specify *OUTFILE and press Enter. specify the file and library to receive the output. At the Object difference messages prompt. To generate an outfile. Note: The System 2 object path name and System 2 name pattern values are ignored if a data group is specified on the Data group definition prompts. Press Enter. accept *NONE to compare all attributes specified in Step 6. At the Report type prompt. At the System 2 object path name and System 2 name pattern prompts. specify the name of the remote system to which IFS objects on the local system are compared. When used as part of 391 . At the Output member options prompts. Skip to Step 11. At the Member to receive output prompt. 5. do the following: a. 8. (Press F1 (Help) to see the name of the supplied database file. At the Output prompt. which indicates that comparisons should occur without consideration for replication in progress. specify the name of the database file member to receive the output of the command. 6. or accept *DFT to use the default maximum time of 300 seconds (5 minutes). and is the default used outside of shipped rules. 
To generate both print output and an outfile. At the Maximum replication lag prompt. The value *INCLUDE places detail messages in the job log. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. specify *BOTH and press Enter. Skip to Step 11. Otherwise. Note: This parameter is only valid when a data group is specified in Step 3. At the Attributes to omit prompt. b. Press F4 to see a valid list of attributes. 7. 13.) 12. or enter the attributes to exclude from the comparison. do one of the following • • • To generate print output. accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes. accept *PRINT and press Enter. At the File to receive output prompts. specify the name of the path name and pattern to which IFS objects on the local system are compared. At the Attributes to compare prompt. specify the maximum amount of time between when an IFS object in the data group changes and when replication of the change is expected to be complete. The System 2 parameter prompt appears if you are comparing IFS objects not defined to a data group. if the IFS object path name and name pattern on system 2 are equal to system 1. You can also specify *NONE. Skip to Step 15. specify whether new records should replace existing file members or be added to the existing list. Accept the default to use the command name to identify the spooled output or specify a unique name. At the Replace or add prompt. g. specify the level of detail for the output report. 10. accept the defaults. At the Submit to batch prompt. 18. 392 . To start the comparison. specify *NO and press Enter to start the comparison. 16. Press Enter continue with the next step. accept the default. do one of the following: • • If you do not want to submit the job for batch processing. 17. the default value is *OMIT since the results are already placed in an outfile. specify the name and library of the job description used to submit the batch request. 
press Enter. At the Job description and Library prompts.shipped rules. At the Job name prompt. To submit the job for batch processing. accept *CMD to use the command name to identify the job or specify a simple name. 15. verify. At the Folder subtree prompt. At the Name pattern prompt.Comparing DLO attributes Comparing DLO attributes You can compare DLO attributes to ensure that DLOs needed for replication exist on both systems or any time you need to verify that DLOs are synchronized between systems. To compare DLOs by path name only. d. Note: If you have automation programs monitoring escape messages for differences in DLO attributes. To compare a subset of DLOs defined to a data group. do one of the following: • To compare attributes for all DLOs defined by the data group DLO entries for a particular data group definition. c. • • 4. specify *NONE and continue with the next step. At the Data group definition prompts. For more information. From the MIMIX Compare. 2. and Synchronize menu. 393 . See the auditing and reporting topics in this book. At the Owner prompt. do the following: 1. be aware that differences due to active replication (Step 13) are signaled via a new difference indicator (*UA) and escape message. specify the data group name and continue with the next step. Note: The *ALL default is not valid if a data group is specified on the Data group definition prompts. and synchronize menu) and press Enter. At the DLO path name prompt. see “Object selection for Compare and Synchronize commands” on page 360. 3. accept *ALL or specify the name or the generic value you want. At the DLO type prompt. You can specify as many as 300 object selectors by using the + for more prompt. For each selector. do the following: a. accept *NONE or specify *ALL to define the scope of IFS objects to be processed. The Compare DLO Attributes (CMPDLOA) command appears. specify the data group name and skip to Step 6. select option 12 (Compare. From the MIMIX Intermediate Main Menu. 
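A subset comparison like the one described in Step 3 and Step 4 might be requested as follows. The OBJ and OUTFILE keywords, the selector element order, and the MYDG, MYLIB, and IFSAOUT names are assumptions shown for illustration; DGDFN, CMPATR, and RPTTYPE are named in this chapter. Prompt the command (F4) to confirm the actual keywords.

```
/* Hypothetical example: audit IFS attributes for the subset of   */
/* data group MYDG under /corporate/accounting, including the     */
/* entire directory subtree, writing results to an outfile.       */
CMPIFSA DGDFN(MYDG)                              +
        OBJ(('/corporate/accounting' *ALL))      +
        CMPATR(*BASIC)                           +
        RPTTYPE(*DIF)                            +
        OUTPUT(*OUTFILE)                         +
        OUTFILE(MYLIB/IFSAOUT)
```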
At the Document library objects prompts. accept *ALL or specify a specific DLO type to compare. select option 4 (Compare DLO attributes) and press Enter. You can optionally specify that results of the comparison are placed in an outfile. specify a value if you want to place an additional filter on the last component of the DLO path name. b. To compare the attributes of DLOs. e. accept *ALL or specify the owner of the DLO. Verify. you can specify elements for one or more object selectors that either identify DLOs to compare or that act as filters to the DLOs defined to the data group indicated in Step 3. At the Object difference messages prompt. specify the maximum amount of time between when a DLO in the data group changes and when replication of the change is expected to be complete. specify the level of detail for the output report. do one of the following • • • To generate print output. Note: The System 2 DLO path name and System 2 DLO name pattern values are ignored if a data group is specified on the Data group definition prompts. specify the name of the path name and pattern to which DLOs on the local system are compared. if the DLO path name and name pattern on system 2 are equal to system 1. (Press F1 (Help) to see the name of the supplied database file. At the Report type prompt. which indicates that comparisons should occur without consideration for replication in progress. If necessary. or accept *DFT to use the default maximum time of 300 seconds (5 minutes).f. 10. At the Maximum replication lag prompt. The System 2 parameter prompt appears if you are comparing DLOs not defined to a data group. specify the name of the remote system to which DLOs on the local system are compared. h. b. specify *OUTFILE and press Enter. 6. At the File to receive output prompts. Press F4 to see a valid list of attributes. specify whether you want detail messages placed in the job log. accept *PRINT and press Enter. Skip to Step 11. g. 11.) 12. 8. accept the defaults. 
At the Output prompt. 13. 7. do the following: a. The value *INCLUDE places detail messages in 394 . specify the name of the database file member to receive the output of the command. At the Attributes to compare prompt. You can also specify *NONE. At the Output member options prompts. To generate an outfile. Otherwise. specify the file and library to receive the output. Accept the default to use the command name to identify the spooled output or specify a unique name. 14. specify whether new records should replace existing file members or be added to the existing list. At the Replace or add prompt. To generate both print output and an outfile. At the Include or omit prompt. Skip to Step 11. At the System 2 DLO path name and System 2 DLO name pattern prompts. accept *NONE to compare all attributes specified in Step 6. accept *BASIC to compare a pre-determined set of attributes or press F4 to see a valid list of attributes. At the Attributes to omit prompt. Skip to Step 15. or enter the attributes to exclude from the comparison. 9. specify *BOTH and press Enter. specify the value you want. Note: This parameter is only valid when a data group is specified in Step 3. The User data prompt appears if you selected *PRINT or *BOTH in Step 9. Press Enter. 5. At the Member to receive output prompt. specify the name and library of the job description used to submit the batch request. Press Enter continue with the next step. 17. the default value is *OMIT since the results are already placed in an outfile. At the Job name prompt. accept *CMD to use the command name to identify the job or specify a simple name. At the Job description and Library prompts. do one of the following: • • If you do not want to submit the job for batch processing. At the Submit to batch prompt. 18. press Enter. specify *NO and press Enter to start the comparison. To submit the job for batch processing. 16. To start the comparison. When used as part of shipped rules. 395 . 15. 
and is the default used outside of shipped rules. accept the default.Comparing DLO attributes the job log. “Advanced subset options for CMPFILDTA” on page 410 describes how to use the capability provided by the Advanced subset options (ADVSUBSET) parameter. as well as comparing records with unique keys.basic procedure (non-active)” on page 415 describes how to compare file data in a data group that is not active.Comparing file record counts and file member data CHAPTER 19 Comparing file record counts and file member data This chapter describes the features and capabilities of the Compare Record Counts (CMPRCDCNT) command and the Compare File Data (CMPFILDTA) command. triggers. use with firewalls. The topics in this chapter include: • • • “Comparing file record counts” on page 396 describes the CMPRCDCNT command and provides a procedure for performing the comparison. “Comparing and repairing file member data . “Comparing and repairing file member data . This topic also describes considerations for security. “Ending CMPFILDTA requests” on page 414 describes how to end a CMPFILDTA request that is in progress and describes the results of ending the job. comparing records that are not allocated.basic procedure” on page 418 describes how to compare and repair file data in a data group that is not active. This command compares the number of current records (*CURRDS) and the number of 396 . and constraints. • • • • • • • • Comparing file record counts The Compare Record Counts (CMPRCDCNT) command allows you to compare the record counts of members of a set of physical files between two systems. “Comparing file member data . “Specifying CMPFILDTA parameter values” on page 404 provides additional information about the parameters for selecting file members to compare and using the unique parameters of this command. “Considerations for using the CMPFILDTA command” on page 400 describes recommendations and restrictions of the command. 
“Comparing file member data using active processing technology” on page 424 describes how to use active processing to compare file member data. “Comparing file member data using subsetting options” on page 427 describes how to use the subset feature of the CMPFILDTA command to compare a portion of member data at one time.members on hold (*HLDERR)” on page 421 describes how to compare and repair file members that are held due to error using active processing. “Significant features for comparing file member data” on page 399 identifies enhanced capabilities available for use when comparing file member data. the #MBRRCDCNT audit does not have an associated recovery phase. • 3. Any repairs must be undertaken manually. this capability provides a less-intensive means to gauge whether files are likely to be synchronized. Differences detected by this audit appear as not recovered in the Audit Summary user interfaces. The Compare Record Counts (CMPRCDCNT) display appears. To check for file data differences. User journal replication processes must be active when this command is used. To check for attribute differences. use the Compare File Attributes (CMPFILA) command. Members on both systems can be actively modified by applications and by MIMIX apply processes while this command is running. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information. To compare file record counts Do the following to compare record counts for an active data group: 1. 2. For each selector. type installation_library/CMPRCDCNT and press F4 (Prompt). Members to be processed must be defined to a data group that permits replication from a user journal. Run the Synchronize DG File Entry (SYNCDGFE) command to correct problems. use the Compare File Data (CMPFILDTA) command.Comparing file record counts deleted records (*NBRDLTRCDS) for members of physical files that are defined for replication by an active data group. 
For information about the results of a comparison. Unlike other audits. Note: Equal record counts suggest but do not guarantee that members are synchronized. At the Data group definition prompts. in the following ways: • • • In MIMIX Availability Manager. repair actions are available for specific errors when viewing the output file for the audit. specify the data group name and skip to Step 4. do the following: 397 . From a command line. specify the data group name and continue with the next step. do one of the following: • To compare data for all files defined by the data group file entries for a particular data group definition. At the File prompts. The #MBRRCDCNT calls the CMPRCDCNT command during its compare phase. Run the #FILDTA audit for the data group to detect and correct problems. In resource-constrained environments. Journaling is required on the source system. To compare a subset of files defined to a data group. see “Object selection for Compare and Synchronize commands” on page 360. you can specify elements for one or more object selectors to act as filters to the files defined to the data group indicated in Step 2. see “What differences were detected by #MBRRCDCNT” on page 550. 4. To generate an outfile and spooled output that is printed. accept *CMD to use the command name to identify the job or specify a simple name. 12. At the Object difference messages prompt. (Press F1 (Help) to see the name of the supplied database file. b. specify *BOTH. do one of the following: • • • • To generate spooled output that is printed. At the Member to receive output prompt.) 7. At the File to receive output prompts.Comparing file record counts and file member data a. specify the value you want. Press Enter and continue with the next step. If you do not want to generate output. do one of the following: • • If you want all compared objects to be included in the report. At the Replace or add prompt. 11. To start the comparison. c. At the Member prompt. 
At the Output prompt. 5. do one of the following: • • If you do not want to submit the job for batch processing. At the Submit to batch prompt. At the Report type prompt. 9. press Enter. accept the default. At the Output member options prompts. specify *NONE. b. 8. 10. 6. Press Enter continue with the next step. To generate an outfile. specify *DIF. 398 . accept *ALL or specify a member name to compare a particular member within a file. specify *NO and press Enter to start the comparison. specify the name or the generic value you want. specify whether you want detail messages placed in the job log. Press Enter and continue with the next step. The value *INCLUDE places detail messages in the job log. accept the default. Press Enter and continue with the next step. specify *OUTFILE. Press Enter and skip to Step 9. At the Job description and Library prompts. At the Job name prompt. specify the name and library of the job description used to submit the batch request. specify the file and library to receive the output. specify whether new records should replace existing file members or be added to the existing list. When used as part of shipped rules. specify the name of the database file member to receive the output of the command. do the following: a. accept the default. At the Include or omit prompt. If you only want objects with detected differences to be included in the report. and is the default used outside of shipped rules. *PRINT. the default value is *OMIT since the results are already placed in an outfile. At the File and library prompts. To submit the job for batch processing. Unique features of the CMPFILDTA command include active server technology and isolated data correction capability. these features enable the detection and correction of file members that are not synchronized while applications and replication processes remain active. (In contrast. 
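The record count procedure above reduces to a single CL command. As a hedged sketch only (the three-part data group name MYDGDEF SYSA SYSB, the library MYLIB, and the exact parameter keyword spellings are illustrative assumptions; prompt the command with F4 to confirm the keywords on your installation):

```
/* Illustrative sketch: compare record counts for one data group and  */
/* report only detected differences into an outfile. The names and    */
/* keyword spellings below are assumptions, not confirmed syntax.     */
installation_library/CMPRCDCNT DGDFN(MYDGDEF SYSA SYSB) +
                               RPTTYPE(*DIF) +
                               OUTPUT(*OUTFILE) +
                               OUTFILE(MYLIB/RCDCNTS) +
                               OUTMBR(*FIRST *REPLACE)
```

Run this way, the request could also be submitted to batch through the Submit to batch prompt rather than processed interactively.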
Significant features for comparing file member data

The Compare File Data (CMPFILDTA) command provides the ability to compare data within members of physical files. Together, these features enable the detection and correction of file members that are not synchronized while applications and replication processes remain active. The CMPFILDTA command is called programmatically by MIMIX AutoGuard functions that help you determine whether files are synchronized and whether your MIMIX environment is prepared for switching. You can also use the CMPFILDTA command interactively or call it from a program.

Repairing data

You can optionally choose to have the CMPFILDTA command repair differences it detects in member data between systems. When files are not synchronized, the CMPFILDTA command provides the ability to resynchronize the file at the record level by sending only the data for the incorrect member to the target system. (In contrast, the Synchronize DG File Entry (SYNCDGFE) command would resynchronize the file by transferring all data for the file from the source system to the target system.)

Active and non-active processing

Two modes of operation are available: active and non-active. The Process while active (ACTIVE) parameter determines whether a requested comparison can occur while application and replication activity is present.

In non-active mode, CMPFILDTA assumes that all files are quiesced and performs file comparisons and repairs without regard to application or replication activity.

In active mode, processing begins in the same manner, performing an internal compare and generating a list of records that are not synchronized. This list is not reported, however. Instead, CMPFILDTA checks the mismatched records against the activity that is happening on the source system and the apply activity that is occurring on the target. If there is a member that needs repair, CMPFILDTA will then report the error; the command will also repair the target file member if *YES was specified on the Repair parameter. During active processing of a member, the DB apply threshold (DBAPYTHLD) parameter can be used to specify what action CMPFILDTA should take if the database apply session backlog exceeds the threshold warning value configured for the database apply process.

Processing members held due to error

The CMPFILDTA command also provides the ability to compare and repair members being held due to error (*HLDERR). To repair members in *HLDERR status, you must also specify that the repair be performed on the target system and request that active processing be enabled. When members in *HLDERR status are processed, the CMPFILDTA command works cooperatively with the database apply (DBAPY) process to compare and repair the file members—and when possible, restore them to an active state.

To support the cooperative efforts of CMPFILDTA and DBAPY, the following transitional states are used for file entries undergoing compare and repair processing:
• *CMPRLS - The file in *HLDERR status has been released. DBAPY will clear the journal entry backlog by applying the file entries in catch-up mode.
• *CMPACT - The journal entry backlog has been applied. CMPFILDTA and DBAPY are cooperatively repairing the member previously in *HLDERR status, and incoming journal entries continue to be applied in forgiveness mode.

When a member held due to error is being processed by the CMPFILDTA command, the entry transitions from *HLDERR status to *CMPRLS to *CMPACT. The member then changes to *ACTIVE status if compare and repair processing is successful. In the event that compare and repair processing is unsuccessful, the member-level entry is set back to *HLDERR.

Additional features

The CMPFILDTA command incorporates many other features to increase performance and efficiency. Parallel processing uses multi-threaded jobs to break up file processing into smaller groups for increased throughput. Rather than having a single-threaded job on each system, multiple “thread groups” break up the file into smaller units of work. This technology can benefit environments with multiple processors as well as systems with a single processor.

Subsetting and advanced subsetting options provide a significant degree of flexibility for performing periodic checks of a portion of the data within a file. File members that are held due to an error can also be compared and repaired.

Considerations for using the CMPFILDTA command

Before you use the CMPFILDTA command, you should be aware of the information in this topic.

Recommendations and restrictions

It is recommended that the CMPFILDTA command be used in tandem with the CMPFILA command. Use the CMPFILA command to determine whether you have a matching set of files and attributes on both systems and use the CMPFILDTA command to compare the actual data within the files.
• Keyed replication - Although you can run the CMPFILDTA command on keyed files, the command only supports files configured for *POSITIONAL replication. The CMPFILDTA command cannot compare files configured for *KEYED replication.
• SNA environments - CMPFILDTA requires a TCP/IP transfer definition—you cannot use SNA. You can be configured for SNA, but then you must override CMPFILDTA to refer to a transfer definition.
• Apply threshold and apply backlog - Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

Using the CMPFILDTA command with firewalls

The CMPFILDTA command uses a communications port based on the port number specified in the transfer definition. If you need to run simultaneous CMPFILDTA jobs, you must open the equivalent number of ports in your firewall. For example, if the port number in your transfer definition is 5000 and you want to run 10 CMPFILDTA jobs at once, you should open at least 10 ports in your firewall—minimally, ports 5001 through 5010. If you attempt to run more jobs than there are open ports, those jobs will fail. For more information, see “System-level communications” on page 140.

Security considerations

You should take extra precautions when using CMPFILDTA’s repair function, as it is capable of accessing and modifying data on your system. CMPFILDTA builds upon the RUNCMD support in MIMIX: CMPFILDTA starts a remote process using RUNCMD, which requires two conditions to be true. First, the user profile of the job that is invoking CMPFILDTA must exist on the remote system and have the same password on the remote system as it does on the local system. Second, the user profile must have appropriate read or update access to the members to be compared or repaired.

To compare file data, you must have read access on both systems. When using the repair function with active technology, only read access is needed, because the repair processing would be done by the database apply process; however, write access on the system to be repaired may also be necessary when active technology is not used.

Comparing allocated records to records not yet allocated

In some situations, members differ in the number of records allocated. One member may have allocated records, while the corresponding records of the other member are not yet allocated. When MIMIX replication encounters these situations, no error is generated nor is the member placed on error hold. If one or more members differ in the manner described above, a distinct escape message is issued; you may wish to monitor these escape messages specifically. If the member to be repaired is the larger of the two members, the excess records are deleted. If the member to be repaired is the smaller of the two members, records are added to make the members the same size.

Comparing files with unique keys, triggers, and constraints

If members being repaired have unique keys, special care should be taken. An update or insert repair action that results in one or more duplicate key exceptions automatically results in the deletion of records with duplicate keys. Deletion of records with duplicate keys is not recorded in the outfile statistics.

Note: The records that could be deleted include those outside the subset of records being compared.

If triggers are enabled, any compare or repair action causes the applicable trigger to be invoked. When a compare is specified, read triggers are invoked as records are read. If repair action is specified, update, insert, and delete triggers are invoked as records are repaired. Triggers should be disabled if this action is not desired by the user. Table 58 describes the interaction of triggers with CMPFILDTA repair and active processing.

Table 58. CMPFILDTA and trigger support

  Trigger type                Trigger activation   Repair on system         Process while     CMPFILDTA
                              group (ACTGRP)       (REPAIR)                 active (ACTIVE)   support
  Read                        *NEW                 Any value                Any value         Not supported
  Read                        NAMED or *CALLER     Any value                Any value         Supported
  Update, insert, and delete  *NEW                 *NONE                    Any value         Supported
  Update, insert, and delete  *NEW                 Any value other than     *NO               Not supported
                                                   *NONE
  Update, insert, and delete  *NEW                 Any value other than     *YES              Supported
                                                   *NONE
  Update, insert, and delete  NAMED or *CALLER     Any value                Any value         Supported

Attention: If an attempt is made to use one of the unsupported situations listed in Table 58, the job that invokes the trigger will end abruptly. You will see a CEE0200 information message in the job log shortly before the job ends. You may also see an MCH2004 message.

Avoiding issues with triggers

It is possible to avoid potential trigger restrictions. You can use any one of the following techniques, which are listed in the preferred order:
• Recreate the trigger program, specifying ACTGRP(*CALLER) or ACTGRP(NAMED)
• Use the Update Program (UPDPRG) command to change to ACTGRP(NAMED)
• Disable trigger programs on the file
• Use the Synchronize Objects (SYNCOBJ) command rather than CMPFILDTA
• Use the Synchronize Data Group File Entries (SYNCDGFE) command rather than CMPFILDTA
• Use the Copy Active File (CPYACTF) command rather than CMPFILDTA
• Save and restore outside of MIMIX

Referential integrity considerations

Referential integrity enforcement can present complex CMPFILDTA repair scenarios. Consider the case where a foreign key is defined between a “department” table and an “employee” table. The referential integrity constraint requires that records in the employee table only be permitted if the department number of the employee record corresponds to a row in the department table with the same department number. It will not be possible for CMPFILDTA repair processing to add a row to the employee table if the corresponding parent row is not present in the department table. Because of this, you should use CMPFILDTA to repair parent tables before using CMPFILDTA to repair dependent tables. Note that the order you specify the tables on the CMPFILDTA command is not necessarily the order in which they will be processed, so you must issue the command once for the parent table, and then again for the dependent table.

Repairing the parent department table first may present its own problems. If CMPFILDTA attempts to delete a row in the department table and the delete rule for the constraint is “restrict”, the row deletion may fail if the employee table still contains records corresponding to the department to be deleted. Such constraints should use a delete rule of “cascade”, “set null”, or “set default”. Like triggers, a delete rule of “cascade”, “set null”, or “set default” can cause records in other tables to be modified or deleted as a result of a repair action. In other situations, a repair action may be prevented due to referential integrity constraints, and CMPFILDTA may not be able to make all repairs. See the IBM Database Programming manual (SC41-5701) for more information on referential integrity.

Job priority

When run, CMPFILDTA uses the priority of the local job to set the priority of the remote job, so that both jobs have the same run priority. However, the run priority of either CMPFILDTA job is superseded if a CMPFILDTA class object (*CLS) exists in the installation library of the system on which the job is running. Otherwise, the remote CMPFILDTA job uses the run priority of the local CMPFILDTA job.

Note: Use the Change Job (CHGJOB) command on the local system to modify the run priority of the local job. To set the remote job to run at a different priority than the local job, use the Create Class (CRTCLS) command to create a *CLS object for the job you want to change.

CMPFILDTA and network inactivity

When the CMPFILDTA command processes large object selection lists, there may be an extended period of communications inactivity. If the period of inactivity exceeds the timeout value of any network inactivity timer in effect, the network timeout will terminate the communications session, causing the CMPFILDTA job to end. To prevent this from occurring, you can use the Change TCP/IP Attributes (CHGTCPA) command to change the TCP Keep Alive (TCPKEEPALV) value so that it is lower than the network inactivity timeout value.

Specifying CMPFILDTA parameter values

This topic provides information about specific parameters of the CMPFILDTA command.

Specifying file members to compare

The CMPFILDTA command allows you to work with physical file members only. You can select the files to compare by using a data group, the object selection parameters, or both. Detailed information about object selection is available in “Object selection for Compare and Synchronize commands” on page 360.
• By data group only: If you specify only by data group, the list of candidate objects to compare is determined by the data group configuration.
• By object selection parameters only: You can compare file members that are not replicated by a data group. By specifying *NONE for the data group and specifying file and member information on the object selection parameters, you define a name space on each system from which a list of candidate objects is created.
• By data group and object selection parameters: When you specify a data group name as well as values on the object selection parameters, the values specified in object selection parameters act as a filter for the items defined to the data group.

The Object attribute element on the File parameter enables you to select particular characteristics of a file. Table 59 lists the extended attributes for objects of type *FILE that are supported as values for the Object attribute element.

Table 59. CMPFILDTA supported extended attributes for *FILE objects

  Object attribute   Description
  PF                 Physical file types, including PF, PF-SRC, and PF-DTA
  PF-DTA             Files of type PF-DTA
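As a hedged illustration of the “by object selection parameters only” case, a comparison outside any data group might look like the following. All object names are placeholders, and the parameter keyword spellings shown are assumptions; verify them by prompting the CMPFILDTA command with F4:

```
/* Illustrative sketch: compare member data for a file that is not    */
/* replicated by a data group, naming the file directly. APPLIB,      */
/* HISTORY, and SYSB are placeholders; the FILE element layout and    */
/* SYSTEM2 keyword are assumptions, not confirmed syntax.             */
installation_library/CMPFILDTA DGDFN(*NONE) +
                               FILE((APPLIB/HISTORY *ALL)) +
                               SYSTEM2(SYSB)
```

Specifying *NONE for the data group defines the name space on each system from which the candidate list is built, as described above.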
If you specify *NO for the Process while active parameter in combination with repairing the file. File entry status: The File entry status parameter provides options for selecting members with specific statuses. Refer to the “Process while active” section. *DFT is the same as *NO. local. CMPFILDTA supported extended attributes for *FILE objects Description Files of type PF-SRC Files of type PF38. However. target. it is always best to perform active repairs during a period of low activity. The *NO option should be used when the files being compared are not actively being updated by either application activity or MIMIX replication activity. Note: *TGT and *SRC are only valid when a data group is specified. system 2. All file repairs are handled directly by CMPFILDTA. uses a mechanism that retries comparison activity until it detects no interference from active files. Note: Do not compare data using active processing technology if the apply process is 180 seconds or more behind. all file repairs are routed through the data group and require that the data group is active. or has exceeded a threshold limit. Three values are allowed on the Process while active parameter—*DFT. and *YES. CMPFILDTA allows you to select the system on which the repair should be made. or you can specify the system definition name. *NO. For efficiency’s sake. File repairs can be performed on system 1. the default value of *DFT is equivalent to *YES. Specifying *NO for the Process while active parameter is the recommended option for running in a quiesced environment. however. it assumes there is no application activity and MIMIX replication is current.Specifying CMPFILDTA parameter values Table 59. 405 . the data group apply process must be configured not to lock the files on the apply system. *YES is only allowed when a data group is specified and should be used when the files being compared are actively being updated by application activity or MIMIX replication activity. 
If a data group is not specified. CMPFILDTA. source. If a data group is specified. you cannot select *SRC when *YES is specified for the Process while active parameter. and PF38-DTA Files of type PF38-DTA Files of type PF38-SRC Object attribute PF-SRC PF38 PF38-DTA PF38-SRC Tips for specifying values for unique parameters The CMPFILDTA command includes several parameters that are unique among MIMIX commands. This parameter allows you to indicate whether compares should be made while file activity is taking place. Process while active: CMPFILDTA includes while-active support. PF38-SRC. This configuration can be accomplished by specifying *NO on the Lock on apply parameter of the data group definition. In this case. including members held due to error (*HLDERR). Repair on system: When you choose to repair files that do not match. including PF38. When used in combination with an active data group. For more information. To specify 1. This value allows you to compare a selected number of records at the end of all selected members. restore them to an active state. Subsetting option: The Subsetting option parameter provides a robust means by which to compare a subset of the data within members. If *ALL is specified. use *RANGE. In this situation. *ACTIVE. the Records at end of file parameter specifies how many trailing records are compared. all data within all selected files is compared. use *ENDDTA. The ASP group name is the name of the primary ASP device within the ASP group. The value *ACTIVE processes only those members that are active1. If you select *ENDDTA. *ALL. see the section titled “Records at end of file. you can compare a random sample using *ADVANCED. When *HLDERR is specified. Several options are available on this parameter: *ALL. and *HLDERR.Comparing file record counts and file member data When members in *HLDERR status are processed. use *ADVANCED. For more information. You must be running on OS V5R2 or greater to use these parameters. 
only member-level entries being held due to error are selected for processing.” Advanced subsetting can be used to audit your entire database over a number of days or to request that a random subset of records be compared. or *RANGE. 406 . The other options compare only a subset of the data. *ENDDTA. is primarily modified with insert operations. only recently inserted data needs to be compared. see the “Subset range” section. *ADVANCED. When a member. the value you select will determine which additional elements are used when comparing data. A data group must also be specified on the command or the parameter is ignored. The File entry status parameter was introduced in V4R4 SPC05SP2. If you want to preserve previous behavior. The following are common scenarios in which comparing a subset of your data is preferable: • • • • If you only need to check a specific range of records. To repair members held due to error using *ALL or *HLDERR. and no additional subsetting is performed. The default value. In some instances. specify STATUS(*ACTIVE). indicates that all supported entry statuses (*ACTIVE and *HLDERR) are included in compare and repair processing. you must also specify that the repair be performed on the target system and request that active processing be used. This parameter is ignored when a data group is specified. If time does not permit a full comparison. such as a history file. *RANGE indicates that the Subset range parameter will be used to specify the subset of records to be compared. the CMPFILDTA command works cooperatively with the database apply (DBAPY) process to compare and repair members held due to error—and when possible. If you do not have time to perform a full comparison all at once but you want all data to be compared over a number of days. 
System 1 ASP group and System 2 ASP group: The System 1 ASP group and System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP) group where objects configured for replication may reside. The ASP group name is the name of the primary ASP device within the ASP group. This parameter is ignored when a data group is specified.

Valid values for the File entry status parameter are *ALL, *ACTIVE, and *HLDERR.

Subset range: Subset range is enabled when *RANGE is specified on the Subsetting option parameter. In this case, the value you select will determine which additional elements are used when comparing data. These elements allow you to specify a range of records to compare. Two elements are included.

Records at end of file: The Records at end of file (ENDDTA) parameter allows you to compare recently inserted data without affecting the other subsetting criteria. The ENDDTA value is always applied to the smaller of the System 1 and System 2 members, and the compare continues through until the end of the larger member. This parameter is also valid if values other than *ENDDTA were specified in the Subsetting option; in this case, both records at the end of the file as well as any additional subsetting options factor into the compare. If some records are selected both by the ENDDTA parameter and another subsetting option, those records are only processed once. When *NONE is specified, records at the end of the members are not compared unless they are selected by other subset criteria.

Advanced subset options: The Advanced subset options (ADVSUBSET) parameter provides the ability to use sophisticated comparison techniques. To specify advanced subsetting, select *ADVANCED on the Subsetting option parameter. For detailed information and examples, see "Advanced subset options for CMPFILDTA" on page 410.
First record and Last record: The First record element can be specified as *FIRST or as a relative record number. In the case of *FIRST, records in the member are compared beginning with the first record. The Last record element can be specified as *LAST or as a relative record number. In the case of *LAST, records in the member are compared up to, and including, the last record. If more than one member is selected for processing, all members are compared using the same relative record number range. Thus, using the range specification is usually only useful for a single member or a set of members with related records.

The Records at end of file parameter can be specified as *NONE or number-of-records. To compare particular records at the end of each member, you must specify the number of records. If you specified *ENDDTA in the Subsetting option parameter, only those records specified in the Records at end of file parameter will be processed. For example, let us assume that you specify 200 for the ENDDTA value. If one system has 1000 records while the other has 1100, relative records 801-1100 would be checked. The relative record numbers of the last 200 records of the smaller file are compared, as well as the additional 100 relative record numbers due to the difference in member size. Using the Records at end of file parameter in daily processing can keep you from missing records that were inserted recently.

System to receive output: The System to receive output (OUTSYS) parameter indicates the system on which the output will be created. By default, the output is created on the local system. When Output is *OUTFILE and Process while active is *YES, the outfile will be updated as the database apply encounters journal entries relating to possible mismatched records. In this case, complete outfile information is only available if the System to receive output parameter indicates that the output file is on the data group target system. The Wait time (seconds) parameter can be used to ensure that all such outfile updates are complete before the command completes.

Specifying the report type, output, and type of processing: The options for selecting processing method, output format, and the contents of the reported differences are similar to those provided for other MIMIX compare commands. For additional details, see "Report types and output formats" on page 378.

Interactive and batch processing: On the Submit to batch parameter, the *YES default submits a multi-thread capable batch job. Multiple threads may be utilized to improve performance. Interactive jobs are not permitted to have multiple threads, which are required for CMPFILDTA processing. Thus, when *NO is specified for the parameter, CMPFILDTA generates a batch immediate job to do the bulk of the processing. Similarly, if CMPFILDTA is issued from a batch job whose ALWMLTTHD attribute is *NO, a batch immediate job will also be spawned. A batch immediate job is not processed through a job queue and is identified with a job type of BCI on the WRKACTJOB screen.

In cases where a batch immediate job is generated, the original job waits for the batch immediate job to complete and re-issues any messages generated by CMPFILDTA. You need to be aware of the following issues when a batch immediate job is generated:
• The identity of the job will be issued in a message in the original job.
• Re-issued messages will not have the original "from" and "to" program information. Instead, you must view the job log of the generated job to determine this information.
• Escape messages created prior to the final message will be converted to diagnostic messages.
• Canceling the interactive request will not cancel the batch immediate job.
• Since the batch immediate job cannot access the interactive job's QTEMP library, outfiles and files to be compared may not reside in QTEMP.

Using the additional parameters

The following parameters allow you to specify an additional level of detail regarding CMPFILDTA command processing. These parameters are available by pressing F10 (Additional parameters).
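The trailing-record selection described earlier for the Records at end of file (ENDDTA) parameter can be modeled with a short sketch. This is illustrative only; the function name is hypothetical and this is not MIMIX code.

```python
def enddta_range(sys1_records, sys2_records, enddta):
    """Model the ENDDTA rule: the value is applied to the smaller of the
    System 1 and System 2 members, and the compare continues through the
    end of the larger member. Returns the inclusive range of relative
    record numbers that are compared."""
    smaller = min(sys1_records, sys2_records)
    larger = max(sys1_records, sys2_records)
    first = max(smaller - enddta + 1, 1)
    return first, larger

# The example from the text: ENDDTA(200) with members of 1000 and 1100
# records selects relative records 801 through 1100.
first, last = enddta_range(1000, 1100, 200)
print(first, last)  # 801 1100
```

The extra 100 relative record numbers beyond the smaller member's last 200 records are included because of the difference in member size.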
Transfer definition: The default for the Transfer definition parameter is *DFT. If a data group was specified, the default uses the transfer definition associated with the data group. If no data group was specified, the transfer definition associated with system 2 is used. The CMPFILDTA command requires that you have a TCP/IP transfer definition for communication with the remote system. However, if your data group is configured for SNA, you can override the SNA configuration by specifying the name of the transfer definition on the command.

Number of thread groups: The Number of thread groups parameter indicates how many thread groups should be used to perform the comparison. You can specify from 1 to 100 thread groups. The default, *CALC, will determine the number of thread groups automatically; the value *CALC does not calculate more than 25 thread groups. When *CALC is specified, the CMPFILDTA command displays a message showing the value calculated as the number of thread groups. The actual number of threads used in the comparison is based on the result of the formula 2x + 1, where x is the value specified or the value calculated internally as the result of specifying *CALC. If you increase the number of thread groups in order to reduce processing time, you also increase processor and memory use. When using this parameter, it is important to balance the time required for processing against the available resources.
Note: Thread groups are created for primary compare processing only. The number of threads used during setup will not exceed the total number of threads used for primary compare processing; in some instances, depending on the number of members selected for processing, only one thread will be used.

DB apply threshold: The DB apply threshold parameter is only valid during active processing and requires that a data group be specified. The parameter specifies what action CMPFILDTA should take if the database apply session backlog exceeds the threshold warning value configured for the database apply process. The default value *END stops the requested compare and repair action when the database apply threshold is reached. In this case, any repair actions that have not been completed are lost. The value *NOMAX allows the compare and repair action to continue even when the database apply threshold has been reached. Continuing processing when the apply process has a large backlog may adversely affect performance of the CMPFILDTA job and its ability to compare a file with an excessive number of outstanding entries. Therefore, *NOMAX should only be used in exceptional circumstances.

Wait time (seconds): The Wait time (seconds) value is only valid when active processing is in effect and specifies the amount of time to wait for active processing to complete. If active processing is enabled and a wait time is specified, CMPFILDTA processing waits the specified time for all pending compare operations processed through the MIMIX replication path to complete. You can specify from 0 to 3600 seconds, or the default *NOMAX. In most cases, the *NOMAX default is highly recommended.

Change date: The Change date parameter provides the ability to compare file members based on the date they were last changed or restored on the source system. This parameter specifies the date and time that MIMIX will use in determining whether to process a file member. Only members changed or restored after the specified date and time will be processed. Members that have not been updated or restored since the specified timestamp will not be compared. These members are identified in the output by a difference indicator value of *EQ (DATE), which is omitted from results when the requested report type is *DIF. All available dates are considered when determining whether to include or exclude a file member.

The recommended values for this parameter are either *ALL or *AUDIT. The shipped default value is *ALL. When *ALL is specified, the last changed and last restored timestamps are ignored by the decision process. When *AUDIT is specified, the compare start timestamp of the #FILDTA audit is used in the determination. The command must specify a data group when this value is used. The *AUDIT value can only be used if audit level *LEVEL30 was in effect at the time the last audit was performed; if the audit level is lower, an error message is issued. The audit level is available by displaying details for the audit (WRKAUD command). When *ALL or *AUDIT is specified for Date, the value specified for Time is ignored.

Note: Exercise caution when specifying actual date and time values. A specified timestamp that is later than the start of the last audit can result in one or more file members not being compared. Any member changed between the time of its last audit and the specified timestamp will not be compared and therefore cannot be reported if it is not synchronized.

Advanced subset options for CMPFILDTA

You can use the Advanced subset options (ADVSUBSET) parameter on the Compare File Data (CMPFILDTA) command for advanced techniques such as comparing records over time and comparing a random sample of data. These techniques provide additional assurance that files are replicated correctly.

For example, let us assume you have a limited batch window. You do not have time to run a total compare every day, but have the requirement to assure that all data is compared over the course of a week. Advanced subsetting makes it simple to accomplish this task by comparing 10 percent of your data each weeknight and completing the remaining 50 percent over the weekend. Using the advanced CMPFILDTA capability, you can divide this work over a number of days. If time does not permit a full comparison, it is always best to compare a random representative sampling of data. The Advanced subset options also provides this capability.

The advanced subset function assigns the data in each member to multiple nonoverlapping subsets in one of two ways. It permits a representative sample subset of the data to be compared, and it allows a specified range of these subsets to be compared. It also permits a full compare to be partitioned into multiple CMPFILDTA requests that, in combination, assure that all data that existed at the time of the first request is compared. Advanced subset options are applied independently for each member processed.

To use advanced subsetting, you will need to identify the following:
• The number of subsets or "bins" to define for the compare
• The manner in which records are assigned to bins
• The specific bins to process

Number of subsets: The first issue to consider when performing advanced subset options is how many subsets or bins to establish. The Number of subsets element is the number of approximately equal-sized bins to define. These bins are numbered from 1 up to the number specified (N). Each record is assigned to one of these bins. You must specify at least one bin.

Interleave: The Interleave factor specifies the mapping between the relative record number and the bin number. The Interleave element specifies the manner in which records are assigned to a bin. If you specify *NONE, records in each member are divided on a percentage basis. For example:

Table 60. Interleave *NONE
                              Member A on Monday   Member A on Tuesday
Total records in member:      30                   45
Number of subsets (bins):     3                    3
Interleave:                   *NONE                *NONE
Records assigned to bin 1:    1-10                 1-15
Records assigned to bin 2:    11-20                16-30
Records assigned to bin 3:    21-30                31-45

Note that when the total number of records in a member changes, the mapping also changes. Records that were once assigned to bin 2 may in the future be assigned to bin 1. If you wish to compare all records over the course of a few days, the changing mapping may cause you to miss records, as the following example demonstrates. For example, if a member contains 1000 records on Monday, records 1 through 100 will be compared on Monday. By Tuesday, perhaps the member has grown to 1500 records. The second 10 percent, to be processed on Tuesday, will contain records 151 through 300. Records 101 through 150 will not get checked at all. A specific Interleave value is preferable in this case. Advanced subsetting provides you with an alternative that does not skip records when members are growing.

Otherwise, the Interleave value specifies a number of contiguous records that should be assigned to each bin before moving to the next bin. Once the last bin is filled, assignment restarts at the first bin. Using bytes, the system determines how many contiguous bytes are assigned to each bin before subsequent bytes are placed in the next bin. For example, let us assume you have specified an interleave value of 20 bytes. The following example is based on the one provided in Table 60:

Table 61. Interleave(20)
                              Member A on Monday   Member A on Tuesday
Total records in member:      30                   45
Record length:                10 bytes             10 bytes
Number of subsets (bins):     3                    3
Interleave (bytes):           20                   20
Interleave (records):         2                    2
Records assigned to bin 1:    1-2 7-8 13-14        1-2 7-8 13-14 19-20
                              19-20 25-26          25-26 31-32 37-38 43-44
Records assigned to bin 2:    3-4 9-10 15-16       3-4 9-10 15-16 21-22
                              21-22 27-28          27-28 33-34 39-40 45
Records assigned to bin 3:    5-6 11-12 17-18      5-6 11-12 17-18 23-24
                              23-24 29-30          29-30 35-36 41-42

If the Interleave and Number of Subsets is constant, the mapping of relative record numbers to bins is maintained, despite the growth of member size. Because every bin is eventually selected, comparisons made over several days will compare every record that existed on the first day.

In most circumstances, *CALC is recommended for the interleave specification. When you select *CALC, the system calculates the interleave value for you. This calculated value will not change due to member size changes.

Specifying *NONE or a very large interleave factor maximizes processing efficiency, since data in each bin is processed sequentially. Specifying a very small interleave factor can greatly reduce efficiency, as little sequential processing can be done before the file must be repositioned. However, if you wish to compare a random sample, a smaller interleave factor provides a more random, or scattered, sample to compare.

First and last subset: The next parameters, the First subset and the Last subset, allow you to specify which bins to process. The First subset and Last subset values work in combination to determine a range of bins to compare. For the First subset, the possible values are *FIRST and subset-number. If you select *FIRST, the range to compare will start with bin 1. Last subset has similar values, *LAST and subset-number. When you specify *LAST, the highest numbered bin is the last one processed.

To compare a random sample of your data, specify a range of subsets that represent the size of the sample. For example, suppose you wish to compare seven percent of your data. If the number of subsets is 100, the first subset is 1, and the last subset is 7, seven percent of the data is compared. A first subset value of 21 and a last subset value of 27 would also compare seven percent of your data, but it would compare a different seven percent than the first example.

To compare all your data over the course of several days, specify the number of subsets and interleave factor that allows you to size each day's workload as your needs require. You would keep the subset value and interleave factor constant, but vary the First and Last subset values each day. The following settings could be used over the course of a week to compare all of your data:

Table 62. Using First and last subset to compare data
Day of week   Number of        Interleave   First    Last     Percentage
              subsets (bins)                subset   subset   compared
Monday        100              *CALC        1        10       10
Tuesday       100              *CALC        11       20       10
Wednesday     100              *CALC        21       30       10
Thursday      100              *CALC        31       40       10
Friday        100              *CALC        41       50       10
Saturday      100              *CALC        51       65       15
Sunday        100              *CALC        66       100      35

Note: You can automate these tasks using MIMIX Monitor. Refer to the MIMIX Monitor documentation for more information.

Ending CMPFILDTA requests

The Compare File Data (CMPFILDTA) command, or a rule which calls it, can be long running and may exceed the time which you have available for it to run.
The CMPFILDTA command recognizes requests to end the job in a controlled manner (ENDJOB OPTION(*CNTRLD)). Messages indicate the step within CMPFILDTA processing at which the end was requested. The report and output file contain as much information as possible with the data available at the step in progress when the job ended. The content of the report and output file is most valuable if the command completed processing through the end of phase 1 compare. The output may be incomplete if the end occurred earlier, and may not be accurate because the full CMPFILDTA request did not complete. If processing did not complete to a point where MIMIX can accurately determine the result of the compare, the value *UN (unknown) is placed in the Difference Indicator.

Note: If the CMPFILDTA command has been long running or has encountered many errors, you may need to specify more time on the ENDJOB command's Delay time, if *CNTRLD (DELAY) parameter. The default value of 30 seconds may not be adequate in these circumstances.

Comparing file member data - basic procedure (non-active)

You can use the CMPFILDTA command to ensure that data required for replication exists on both systems and any time you need to verify that files are synchronized between systems. You can optionally specify that results of the comparison are placed in an outfile.

Before you begin, see the recommendations, restrictions, and security considerations described in "Considerations for using the CMPFILDTA command" on page 400. You should also read "Specifying CMPFILDTA parameter values" on page 404 for additional information about parameters and values that you can specify.

To perform a basic data comparison, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter. The Compare File Data (CMPFILDTA) command appears.
3. At the Data group definition prompts, do one of the following:
   • To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.
   • To compare a subset of files defined to a data group, specify the data group name and continue with the next step.
   • To compare data by file name only, specify *NONE and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see "Object selection for Compare and Synchronize commands" on page 360. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
   a. At the File and library prompts, specify the name or the generic value you want.
   b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
   c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
   d. At the Include or omit prompt, specify the value you want.
   e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.
   Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.
6. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.
   Note: This parameter is ignored when a data group definition is specified.
7. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.
   Note: This parameter is ignored when a data group definition is specified.
8. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.
9. At the Repair on system prompt, accept *NONE to indicate that no repair action is done.
10. At the Process while active prompt, specify *NO to indicate that active processing technology should not be used in the comparison.
11. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.
12. At the Report type prompt, do one of the following:
    • If you want all compared objects to be included in the report, accept the default.
    • If you only want objects with detected differences to be included in the report, specify *DIF.
    • If you want to include the member details and relative record number (RRN) of the first 1,000 objects that have differences, specify *RRN.
    Notes:
    • The *RRN value can only be used when *NONE is specified for the Repair on system prompt and *OUTFILE is specified for the Output prompt.
    • The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. This value provides the information that enables you to display the specific records on the two systems and determine the system on which the file should be repaired.
13. At the Output prompt, do one of the following:
    • If you do not want to generate output, specify *NONE. Press Enter and skip to Step 18.
    • To generate spooled output that is printed, accept the default, *PRINT. Press Enter and skip to Step 18.
    • To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
    • To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
    a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
    b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output should be created.
    Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Output prompt, you must select *SYS2 for the System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
18. At the Submit to batch prompt, do one of the following:
    • If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
    • To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
21. To start the comparison, press Enter.
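Taken together, the prompted steps above correspond to a single CMPFILDTA invocation. The sketch below is illustrative only: the STATUS, ENDDTA, ADVSUBSET, and OUTSYS keywords appear in this chapter, but the remaining keyword names and all object names are assumptions and should be verified by prompting the command (F4).

```
/* Hypothetical sketch - prompt CMPFILDTA (F4) to confirm keywords */
CMPFILDTA DGDFN(MYDGDFN SYSA SYSB)  /* data group name - assumed  */
          FILE((*ALL))              /* object selectors - assumed */
          REPAIR(*NONE)             /* step 9: no repair action   */
          ACTIVE(*NO)               /* step 10: not while-active  */
          STATUS(*ACTIVE)           /* step 11: active members    */
          OUTPUT(*OUTFILE)          /* step 13: generate outfile  */
          OUTFILE(MYLIB/MYOUT)      /* step 14 - names assumed    */
          OUTSYS(*LOCAL)            /* step 16: local system      */
```

For the repair variants that follow, the same sketch would change mainly in the repair and while-active values, subject to the restriction that *SRC cannot be combined with active processing.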
Comparing and repairing file member data - basic procedure

You can use the CMPFILDTA command to repair data on the local or remote system. Before you begin, see the recommendations, restrictions, and security considerations described in "Considerations for using the CMPFILDTA command" on page 400. You should also read "Specifying CMPFILDTA parameter values" on page 404 for additional information about parameters and values that you can specify.

To compare and repair data, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter. The Compare File Data (CMPFILDTA) command appears.
3. At the Data group definition prompts, do one of the following:
   • To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.
   • To compare a subset of files defined to a data group, specify the data group name and continue with the next step.
   • To compare data by file name only, specify *NONE and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. For more information, see "Object selection for Compare and Synchronize commands" on page 360. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following:
   a. At the File and library prompts, specify the name or the generic value you want.
   b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
   c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
   d. At the Include or omit prompt, specify the value you want.
   e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared.
   Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.
6. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1.
   Note: This parameter is ignored when a data group definition is specified.
7. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2.
   Note: This parameter is ignored when a data group definition is specified.
8. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.
9. At the Repair on system prompt, specify *SYS1, *SYS2, *LOCAL, *TGT, *SRC, or the system definition name to indicate the system on which repair action should be performed.
   Note: *TGT and *SRC are only valid if you are comparing files defined to a data group. *SRC is not valid if active processing is in effect.
10. At the Process while active prompt, specify *NO to indicate that active processing technology should not be used in the comparison.
11. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.
12. At the Report type prompt, do one of the following:
    • If you want all compared objects to be included in the report, accept the default.
    • If you only want objects with detected differences to be included in the report, specify *DIF.
13. At the Output prompt, do one of the following:
    • If you do not want to generate output, specify *NONE. Press Enter and skip to Step 18.
    • To generate spooled output that is printed, accept the default, *PRINT. Press Enter and skip to Step 18.
    • To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
    • To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
14. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
    a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
    b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output should be created.
    Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Output prompt, you must select *SYS2 for the System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
18. At the Submit to batch prompt, do one of the following:
    • If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
    • To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
21. To start the comparison, press Enter.
Comparing and repairing file member data - members on hold (*HLDERR)

Members that are being held due to error (*HLDERR) can be repaired with the Compare File Data (CMPFILDTA) command during active processing. When members in *HLDERR status are processed, the CMPFILDTA command works cooperatively with the database apply (DBAPY) process to compare and repair the members and, when possible, restore them to an active state.

The following procedure repairs a member without transmitting the entire member. As such, this method is generally faster than other methods of repairing members in *HLDERR status that transmit the entire member or file. However, if significant activity has occurred on the source system that has not been replicated on the target system, it may be faster to synchronize the member using the Synchronize Data Group File Entry (SYNCDGFE) command.

Before you begin, see the recommendations, restrictions, and security considerations described in “Considerations for using the CMPFILDTA command” on page 400. You should also read “Specifying CMPFILDTA parameter values” on page 404 for additional information about parameters and values that you can specify.

To repair a member with a status of *HLDERR, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter. The Compare File Data (CMPFILDTA) command appears.
3. At the Data group definition prompts, specify the name of the data group. Note: You must specify a data group name. If you want to compare data for all files defined by the data group file entries for the data group, skip to Step 5.
4. If necessary, you can optionally specify elements for one or more object selectors that act as filters to the files defined to the data group indicated in Step 3. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see “Object selection for Compare and Synchronize commands” on page 360. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
c. At the Include or omit prompt, specify the value you want.
5. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
6. At the Repair on system prompt, specify *TGT to indicate that repair action be performed on the target system.
7. At the Process while active prompt, specify *YES to indicate that active processing technology should be used in the comparison.
8. At the File entry status prompt, specify *HLDERR to process members being held due to error only.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1. Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2. Note: This parameter is ignored when a data group definition is specified.
11. At the Output prompt, do one of the following:
• To generate spooled output that is printed, specify *PRINT. Press Enter and skip to Step 15.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 15.
12. At the File to receive output prompts, do the following:
a. Specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
b. At the System to receive output prompt, specify the system on which the output should be created.
13. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
14. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
15. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
18. To compare and repair the file, press Enter.
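For reference, the prompted steps above collapse into a single command request. The following is a hedged sketch only: the data group name is an example, and the parameter keywords are our rendering of the prompts named in this procedure (Repair on system, Process while active, File entry status), so prompt the command with F4 to confirm the actual keywords and defaults on your installation.

```
/* Repair members held due to error (*HLDERR) for an example data   */
/* group named APPDG, repairing on the target system while active.  */
/* Keywords shown are illustrative of the prompts above; verify F4. */
CMPFILDTA DGDFN(APPDG)     /* Data group definition                 */
          REPAIR(*TGT)     /* Repair on system: target              */
          ACTIVE(*YES)     /* Process while active                  */
          STATUS(*HLDERR)  /* File entry status: held members only  */
          OUTPUT(*NONE)    /* No report or outfile                  */
          BATCH(*YES)      /* Submit to batch                       */
```

Because repairs for members in *HLDERR status work cooperatively with the database apply process, a request like this only transmits the data needed to repair the member rather than the entire member.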
Comparing file member data using active processing technology

You can set the CMPFILDTA command to use active processing technology when a data group is specified on the command.

Note: Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.

Before you begin, see the recommendations, restrictions, and security considerations described in “Considerations for using the CMPFILDTA command” on page 400. You should also read “Specifying CMPFILDTA parameter values” on page 404 for additional information about parameters and values that you can specify.

To compare data using active processing, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter. The Compare File Data (CMPFILDTA) command appears.
3. At the Data group definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.
• To compare a subset of files defined to a data group, specify the data group name and continue with the next step.
4. You can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see “Object selection for Compare and Synchronize commands” on page 360. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
c. At the Include or omit prompt, specify the value you want.
d. At the System 2 file and System 2 library prompts, accept the defaults.
e. Press Enter.
5. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
6. At the Repair on system prompt, specify *TGT to indicate that repair action be performed on the target system of the data group.
7. At the Process while active prompt, specify *YES or *DFT to indicate that active processing technology be used in the comparison. Since a data group is specified on the Data group definition prompts, *DFT will render the same results as *YES.
8. At the File entry status prompt, specify *ACTIVE to process only those file members that are active.
9. At the Subsetting option prompt, specify *ALL to select all data and to indicate that no subsetting is performed.
10. At the System 1 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 1. Otherwise, specify the name of the ASP group that contains objects to be compared on system 1. Note: This parameter is ignored when a data group definition is specified.
11. At the System 2 ASP group prompt, accept the default if no objects from any ASP group are to be compared on system 2. Otherwise, specify the name of the ASP group that contains objects to be compared on system 2. Note: This parameter is ignored when a data group definition is specified.
12. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the default.
• If you only want objects with detected differences to be included in the report, specify *DIF.
13. At the Output prompt, do one of the following:
• To generate spooled output that is printed, specify *PRINT. Press Enter and skip to Step 17.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 17.
14. At the File to receive output prompts, do the following:
a. Specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
b. At the System to receive output prompt, specify the system on which the output should be created.
Note: If *OUTFILE was specified on the Output prompt, it is recommended that you select *SYS2 for the System to receive output prompt.
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
16. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used when the command is invoked from outside of shipped audits. When used as part of shipped audits, the default value is *OMIT since the results are already placed in an outfile.
17. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
20. To start the comparison, press Enter.
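As with the previous procedure, the active-processing comparison can be expressed as one command request. This sketch is illustrative, not definitive: APPDG is an example data group name, and the keywords mirror the prompts described above; verify them with the command prompter (F4) before use.

```
/* Compare all file members defined to example data group APPDG     */
/* using active processing, reporting only differences to an        */
/* outfile. Keywords are illustrative of the prompts above.         */
CMPFILDTA DGDFN(APPDG)     /* Data group definition                 */
          REPAIR(*TGT)     /* Repair on the target system           */
          ACTIVE(*DFT)     /* With a data group, *DFT acts as *YES  */
          STATUS(*ACTIVE)  /* Active members only                   */
          RPTTYPE(*DIF)    /* Report type: differences only         */
          OUTPUT(*OUTFILE) /* Generate an outfile                   */
          BATCH(*YES)      /* Submit to batch                       */
```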
Comparing file member data using subsetting options

You can use the CMPFILDTA command to audit your entire database over a number of days. You can also select files with specific statuses for compare and repair processing.

Before you begin, see the recommendations, restrictions, and security considerations described in “Considerations for using the CMPFILDTA command” on page 400. You should also read “Specifying CMPFILDTA parameter values” on page 404 for additional information about parameters and values that you can specify.

To compare data using the subsetting options, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7 (Compare file data) and press Enter. The Compare File Data (CMPFILDTA) command appears.
3. At the Data group definition prompts, do one of the following:
• To compare data for all files defined by the data group file entries for a particular data group definition, specify the data group name and skip to Step 6.
• To compare a subset of files defined to a data group, specify the data group name and continue with the next step.
• To compare data by file name only, specify *NONE and continue with the next step.
4. You can specify elements for one or more object selectors that either identify files to compare or that act as filters to the files defined to the data group indicated in Step 3. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see “Object selection for Compare and Synchronize commands” on page 360. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names on system 2 are equal to system 1, accept the defaults. Otherwise, specify the name of the file and library to which files on the local system are compared. Note: The System 2 file and System 2 library values are ignored if a data group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to a data group. If necessary, specify the name of the remote system to which files on the local system are compared.
6. At the Repair on system prompt, specify a value if you want repair action performed. Note: To process members in *HLDERR status, you must specify *TGT.
7. At the Process while active prompt, specify whether active processing technology should be used in the comparison.
Notes:
• If you are comparing files associated with a data group, *DFT uses active processing.
• If you are comparing files not associated with a data group, *DFT does not use active processing.
• To process members in *HLDERR status, you must specify *YES.
• Do not compare data using active processing technology if the apply process is 180 seconds or more behind, or has exceeded a threshold limit.
8. At the File entry status prompt, do one of the following:
• To process both active members and members being held due to error (*ACTIVE and *HLDERR), specify the default value *ALL.
• To process active members only, specify *ACTIVE.
• To process members being held due to error only, specify *HLDERR.
Notes:
• To process members in *HLDERR status, *TGT must also be specified for the Repair on system prompt (Step 6) and *YES must be specified for the Process while active prompt (Step 7).
• When *ALL or *HLDERR is specified for the File entry status prompt, you must specify a value other than *ALL to use additional subsetting.
9. At the Subsetting option prompt, do one of the following:
• To compare a fixed range of data, specify *RANGE and press Enter to see additional prompts. Continue with Step 10.
• To define how many subsets should be established, how member data is assigned to the subsets, and which range of subsets to compare, specify *ADVANCED and press Enter to see additional prompts. Skip to Step 11.
• To indicate that only data specified on the Records at end of file prompt is compared, specify *ENDDTA and press Enter to see additional prompts. Skip to Step 12.
10. At the Subset range prompts, do the following:
a. At the First record prompt, specify the relative record number of the first record to compare in the range.
b. At the Last record prompt, specify the relative record number of the last record to compare in the range.
c. Skip to Step 12.
11. At the Advanced subset options prompts, do the following:
a. At the Number of subsets prompt, specify the number of approximately equal-sized subsets to establish.
b. At the Interleave prompt, specify the interleave factor. In most cases, the default *CALC is highly recommended.
c. At the First subset prompt, specify the first subset in the sequence of subsets to compare. Subsets are numbered beginning with 1.
d. At the Last subset prompt, specify the last subset in the sequence of subsets to compare.
12. At the Records at end of file prompt, specify the number of records at the end of the member to compare. These records are compared regardless of other subsetting criteria. Note: If *ENDDTA is specified on the Subsetting option prompt, you must specify a value other than *NONE.
13. At the Report type prompt, do one of the following:
• If you want all compared objects to be included in the report, accept the default.
• If you only want objects with detected differences to be included in the report, specify *DIF.
• If you want to include the member details and relative record number (RRN) of the first 1,000 objects that have differences, specify *RRN.
Notes:
• The *RRN value can only be used when *NONE is specified for the Repair on system prompt and *OUTFILE is specified for the Output prompt.
• The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN can help resolve situations where a discrepancy is known to exist but you are unsure which system contains the correct data. This value provides the information that enables you to display the specific records on the two systems and determine the system on which the file should be repaired.
14. At the Output prompt, do one of the following:
• To generate spooled output that is printed, specify *PRINT. Press Enter and skip to Step 19.
• To generate an outfile, specify *OUTFILE. Press Enter and continue with the next step.
• To generate an outfile and spooled output that is printed, specify *BOTH. Press Enter and continue with the next step.
• If you do not want to generate output, specify *NONE. Press Enter and skip to Step 19.
15. At the File to receive output prompts, specify the file and library to receive the output. (Press F1 (Help) to see the name of the supplied database file.)
16. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace existing file members or be added to the existing list.
17. At the System to receive output prompt, specify the system on which the output should be created. Note: If *YES is specified on the Process while active prompt and *OUTFILE was specified on the Output prompt, you must select *SYS2 for the System to receive output prompt.
18. At the Object difference messages prompt, specify whether you want detail messages placed in the job log. The value *INCLUDE places detail messages in the job log, and is the default used outside of shipped rules. When used as part of shipped rules, the default value is *OMIT since the results are already placed in an outfile.
19. At the Submit to batch prompt, do one of the following:
• If you do not want to submit the job for batch processing, specify *NO and press Enter to start the comparison.
• To submit the job for batch processing, accept the default. Press Enter and continue with the next step.
20. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
21. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
22. To start the comparison, press Enter.

CHAPTER 20 Synchronizing data between systems

This chapter contains information about support provided by MIMIX commands for synchronizing data between two systems. The data that MIMIX replicates must be synchronized on several occasions:
• During initial configuration of a data group, you need to ensure that the data to be replicated is synchronized between both systems defined in a data group.
• If you change the configuration of a data group to add new data group entries, the objects must be synchronized.
• You may also need to synchronize a file or object if an error occurs that causes the two systems to become not synchronized.
• If automatic recovery policies are disabled, you may need to use synchronize commands to correct a file or object in error or to correct differences detected by audits or compare commands.
The automatic recovery features of MIMIX® AutoGuard™ also use synchronize commands to recover differences detected during replication and audits.
The synchronize commands provided with MIMIX can be loosely grouped by common characteristics and the level of function they provide.
Initial synchronization: Initial synchronization can be performed manually with a variety of MIMIX and IBM commands, or by using the Synchronize Data Group (SYNCDG) command. The SYNCDG command is intended especially for performing the initial synchronization of one or more data groups and uses the auditing and automatic recovery support provided by MIMIX AutoGuard. The command can be long-running. Environments using MIMIX support for IBM WebSphere MQ have additional requirements for the initial synchronization of replicated queue managers. For more information, see the MIMIX for IBM WebSphere MQ book. For information about initial synchronization, see these topics:
• “Performing the initial synchronization” on page 442 describes how to establish a synchronization point and identifies other key information.
• Topic “Considerations for synchronizing using MIMIX commands” on page 433 describes subjects that apply to more than one group of commands, such as the maximum size of an object that can be synchronized, how large objects are handled, and how user profiles are addressed.
Synchronize commands: The commands Synchronize Object (SYNCOBJ), Synchronize IFS Object (SYNCIFS), and Synchronize DLO (SYNCDLO) provide robust support in MIMIX environments for synchronizing library-based objects, IFS objects, and DLOs, as well as their associated object authorities. Each command has considerable flexibility for selecting objects associated with or independent of a data group. Additionally, these commands are often called by other functions, such as by the automatic recovery features of MIMIX AutoGuard and by options to synchronize
objects identified in tracking entries used with advanced journaling. The contents of the object and its attributes and authorities are synchronized. Additional options provide the means to address triggers, referential constraints, logical files, and related files. For additional information, see:
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 437
• “About synchronizing tracking entries” on page 441
Synchronize Data Group Activity Entry: The Synchronize DG Activity Entry (SYNCDGACTE) command provides the ability to synchronize library-based objects, IFS objects, and DLOs that are associated with data group activity entries which have specific status values. For additional information, see “About synchronizing data group activity entries (SYNCDGACTE)” on page 438.
Synchronize Data Group File Entry: The Synchronize DG File Entry (SYNCDGFE) command provides the means to synchronize database files associated with a data group by data group file entries. For more information about this command, see “About synchronizing file entries (SYNCDGFE command)” on page 439.
Send Network commands: The Send Network Object (SNDNETOBJ), Send Network IFS Object (SNDNETIFS), and Send Network DLO (SNDNETDLO) commands support fewer usage options and usability benefits than the Synchronize commands, respectively. These commands do not support synchronizing based on a data group name, and they may require multiple invocations per library, path, or directory.
Procedures: The procedures in this chapter are for commands that are accessible from the MIMIX Compare, Verify, and Synchronize menu. Typically, when you need to synchronize individual items in your configuration, the best approach is to use the options provided on the displays where they are appropriate to use. The options call the appropriate command and, in many cases, pre-select some of the fields.
The following procedures are included:
• “Synchronizing database files” on page 449
• “Synchronizing objects” on page 451
• “Synchronizing IFS objects” on page 455
• “Synchronizing DLOs” on page 459
• “Synchronizing data group activity entries” on page 462
• “Synchronizing tracking entries” on page 464
• “Sending library-based objects” on page 465
• “Sending IFS objects” on page 467
• “Sending DLO objects” on page 468
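Each of the command groups above can also be invoked directly from a command line. As a hedged illustration of the SYNCDGFE group, the request below synchronizes one file configured to a data group. It is a sketch under stated assumptions: the data group, file, and library names are placeholders, and the keywords reflect the parameter names discussed in this chapter (Sending mode, Maximum sending size), so confirm the actual keywords with the command prompter (F4).

```
/* Synchronize one replicated file for example data group APPDG,    */
/* limiting the transmitted size to the transfer definition's       */
/* threshold. Keywords are illustrative; verify with F4 (prompt).   */
SYNCDGFE DGDFN(APPDG)
         FILE(ORDERS APPLIB)  /* File and library (example names)   */
         METHOD(*SAVRST)      /* Sending mode; required for *TFRDFN */
         MAXSIZE(*TFRDFN)     /* Maximum sending size from tfr dfn  */
```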
Before you synchronize you should be aware of information in the following topics: • • • • • “Limiting the maximum sending size” on page 433 “Synchronizing user profiles” on page 433 “Synchronizing large files and objects” on page 435 “Status changes caused by synchronizing” on page 435 “Synchronizing objects in an independent ASP” on page 436 Limiting the maximum sending size The Synchronize commands (SYNCOBJ. or AUDRCY). Synchronizing user profiles User profile objects (*USRPRF) can be synchronized explicitly or implicitly. When any of the automatic recovery policies are enabled (DBRCY. You can also specify the value *TFRDFN to use the threshold size from the transfer definition associated with the data group1.999 megabytes (MB). SYNCIFS. or specify a value between 1 and 9. OBJRCY. You can adjust the SYNCTHLD policy value for the installation or optionally set a value for a specific data group. and SYNCDLO) Synchronize Data Group Activity Entry (SYNCDGACTE) Synchronize Data Group File Entry (SYNCDGFE) The following subtopics apply to more than one group of commands. specify *TFRDFN. The Set MIMIX Policies (SETMMXPCY) command sets policies for automatic recovery actions and for the synchronize threshold used by the commands MIMIX invokes to perform recovery actions. and SYNCDLO) and the Send Network Objects (SNDNETOBJ) command can synchronize user profiles either 1. When automatic recovery actions initiate a Synchronize or SYNCDGFE command. By default. SYNCIFS. SYNCIFS. threshold size (SYNCTHLD) policy is used for the MAXSIZE value on the command. The Synchronize commands (SYNCOBJ.999. the value of the Sync. 433 . To preserve behavior prior to changes made in V4R4 service pack SPC05SP4. and SYNCDLO) and the Synchronize Data Group File Entry (SYNCDGFE) command provide the ability to limit the size of files or objects transmitted during synchronization with the Maximum sending size (MAXSIZE) parameter. On the SYNCDGFE command. 
the value *TFRDFN is only allowed when the Sending mode (METHOD) parameter specifies *SAVRST. no maximum value is specified. and user profiles that have private authorities to an object are implicitly synchronized. • • • Synchronizing user profiles with the SNDNETOBJ command The Send Network Objects (SNDNETOBJ) command explicitly synchronizes user profiles when you specify *USRPRF for the object type on the command. The status of the user profile on the target system is set to *DISABLED. 434 . is specified on these commands. and SYNCDLO commands implicitly synchronize user profiles associated with the object if they do not exist on the target system. it is synchronized and its status on the target system is set to *DISABLED. The status of the user profile on the target system is set to match the value from the data group object entry. the owning user profile. the SYNCOBJ. the following occurs: – If the user profile exists on the target system. its status on the target system remains unchanged. When the Synchronize command specifies a data group and that data group does not have a data group object entry for the user profile. the object and the user profile are synchronized. Although only the requested object type. such as *PGM. the object and the associated user profile are synchronized. If a data group object entry excludes the user profile from replication. the object is synchronized and its owner is changed to the default owner indicated in the data group definition. The following information describes slight variations in processing. – If the user profile does not exist on the target system. If the user profile does not exist on the target system. it is synchronized and its status on the target system is set to *DISABLED. the primary group profile. If you specified a user profile but did not specify a data group.implicitly or explicitly. 
The status of the user profile on the target system is affected as follows: • If you specified a data group and a user profile which is configured for replication. SYNCIFS. The status of the user profile on the target system is affected as follows: • • If the user profile exists on the target system. The user profile is not synchronized. When synchronizing other object types. Synchronizing user profiles with SYNCnnn commands The SYNCOBJ command explicitly synchronizes user profiles when you specify *USRPRF for the object type on the command. its status on the target system remains unchanged. the status of the user profile on the target system is the value specified in the configured data group object entry. as follows: • When the Synchronize command specifies a data group and that data group has a data group object entry which includes the user profile. 435 . large files or objects can negatively impact performance by consuming too much bandwidth. is specified on the command. MIMIX adds the system distribution directory entry for the user profile on the target system and specifies these values: • • • • • User ID: same value as retrieved from the source system Description: same value as retrieved from the source system Address: local-system name User profile: user-profile name All other directory entry fields are blank Synchronizing large files and objects When configured for advanced journaling. this command implicitly synchronizes user profiles associated with the object if they do not exist on the target system. Although only the requested object type. Certain commands for synchronizing provide the ability to limit the size of files or objects transmitted during synchronization. MIMIX automatically adds any missing system distribution directory entries for user profiles. The object and associated user profiles are synchronized. On certain commands. 
Status changes caused by synchronizing In some circumstances the Synchronize Data Group Activity Entry (SYNCDGACTE) command changes the status of activity entries when the command completes. the owning user profile. The status of the user profile on the target system is set to *DISABLED. large objects (LOBs) can be synchronized through the user (database) journal. If advanced journaling is not used in your environment. you may want to consider synchronizing large files or objects (over 1 GB) outside of MIMIX. The synchronize (SYNCnnn) and the SNDNETOBJ commands provide this capability. the primary group profile. The Threshold size (THLDSIZE) parameter on the transfer definition can be used to limit the size of objects transmitted with the Send Network Object commands. For additional details. such as *PGM. Missing system distribution directory entries automatically added When a missing user profile is detected during replication or synchronization of an object.Considerations for synchronizing using MIMIX commands When synchronizing other object types. If replication or a synchronization request determines that a user profile is missing on the target system and a system directory entry exists on the source system for that user profile. and user profiles that have private authorities to an object are implicitly synchronized. see “About synchronizing data group activity entries (SYNCDGACTE)” on page 438. See “Limiting the maximum sending size” on page 433 for more information. You can synchronize a database file that contains LOB data using the Synchronize Data Group File Entry (SYNCDGFE) command. it is possible to control the size of files and objects sent to another system. During traditional synchronization. and System 2 ASP device number parameters. each replicated activity has associated tracking entries. 436 . the status of the tracking entry will be updated once the data group is restarted. 
the status of the tracking entry will remain in its original status or have a status of *HLD. When advanced journaling is configured. the status of the tracking entry will change to *ACTIVE upon successful completion of the synchronization request. you must specify values for the System 1 ASP group or device. SYNCIFS and SYNCDLO) do not change the status of activity entries associated with the objects being synchronized. Synchronizing objects in an independent ASP When synchronizing data that is located in an independent ASP. Activity entries retain the same status after the command completes. When you use the SYNCOBJ or SYNCIFS commands to synchronize an object that has a corresponding tracking entry. • In order for the Send Network Object (SNDNETOBJ) command to access objects that are located in an independent auxiliary storage pool (ASP) on the source system. do one of the following on the Synchronize Object (SYNCOBJ) command: – Specify the data group definition. Note: The SYNCIFS command will change the status of an activity entry for an IFS object configured for advanced journaling. be aware of the following: • In order for MIMIX to access objects located in an independent ASP. If the synchronization is not successful. you must first use the IBM command Set ASP Group (SETASPGRP) on the local system before using the SNDNETOBJ command.The Synchronize commands (SYNCOBJ. If the data group is not active. – If no data group is specified. System 2 ASP device number. When using the SYNCIFS command for a data group configured for advanced journaling. the additional parameter information is used to filter the list of objects identified for the data group. You can also synchronize only the object or only the authority attributes of an object. When you use the SYNCOBJ command to synchronize only the authorities for an object and a data group name is not specified.About MIMIX commands for synchronizing objects. 
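As a brief sketch of the SNDNETOBJ requirement above, the IBM Set ASP Group command is run first in the same job; the ASP group name PRODASP is a placeholder, not a value from this manual:

```
/* Make objects in the independent ASP visible to this job's  */
/* name space before running SNDNETOBJ (PRODASP is an example */
/* ASP group name - substitute the device configured on your  */
/* system)                                                    */
SETASPGRP ASPGRP(PRODASP)
```

Because SETASPGRP changes only the issuing job's library name space, it must be run in the same job that subsequently issues the SNDNETOBJ command.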
About MIMIX commands for synchronizing objects, IFS objects, and DLOs

The Synchronize Object (SYNCOBJ), Synchronize IFS (SYNCIFS), and Synchronize DLO (SYNCDLO) commands provide versatility for synchronizing objects and their authority attributes. By default, the object and all authority-related attributes are synchronized. You can also synchronize only the object or only the authority attributes of an object. Authority attributes include ownership, authorization list, primary group, and public and private authorities. Each command has a Synchronize authorities parameter to indicate whether authority attributes are synchronized. When you use the SYNCOBJ command to synchronize only the authorities for an object and a data group name is not specified, the local system becomes the source system and a target system must be identified.

Identifying what to synchronize: On each command, you can identify objects to synchronize by specifying a data group, a subset of a data group, or by specifying objects independently of a data group.
• When you specify a data group, the objects to be synchronized by the command are the same as those identified for replication by the data group. For example, specifying a data group on the SYNCOBJ command will synchronize the same library-based objects as those configured for replication by the data group.
• If you specify a data group as well as additional object information in command parameters, the additional parameter information is used to filter the list of objects identified for the data group.
• For more information about the object selection criteria used when no data group is specified on these commands, see “Object selection for Compare and Synchronize commands” on page 360.

When to run: Each command can be run when the data group is in an active or an inactive state. However, if any files processed by the command are cooperatively processed and a data group that contains these files is active, the command could fail if the database apply job has a lock on these files. Ideally, the data group can be active but it should not have a backlog of unprocessed entries. Using the SYNCOBJ, SYNCIFS, and SYNCDLO commands during off-peak usage or when the objects being synchronized are in a quiesced state reduces contention for object locks.

Where to run: The synchronize commands can be run from either system. The list of objects to synchronize is generated on the local system. When a data group is specified, its source system determines the objects to synchronize. Note: If you run these commands from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.

Additional parameters: On each command, the following parameters provide additional control of the synchronization process:
• The Save active parameter provides the ability to save the object in an active environment using IBM's save-while-active support. Values supported are the same as those used in related IBM commands.
• The Save active wait time parameter specifies the amount of time to wait for a commit boundary or for a lock on an object. If a lock is not obtained in the specified time, the save operation ends and the synchronization attempt fails. If a commit boundary is not reached in the specified time, the object is not saved and the synchronization will fail.
• The Maximum sending size (MB) parameter specifies the maximum size that an object can be in order to be synchronized. For more information, see “Limiting the maximum sending size” on page 433.

About synchronizing data group activity entries (SYNCDGACTE)

The Synchronize Data Group Activity Entry (SYNCDGACTE) command supports the ability to synchronize library-based objects, IFS objects, or DLOs associated with data group activity entries. The contents of the object, its attributes, and its authorities are synchronized between the source and target systems. The content and attributes are replicated using the IBM i save and restore commands.

Activity entries whose status falls in the following categories can be synchronized: *ACTIVE, *COMPLETED, *DELAYED, or *FAILED. From the 5250 emulator, the specific status of individual activity entries appears on the Work with DG Activity Entries display (WRKDGACTE command); data group activity and the status category of the represented object are listed on the Work with Data Group Activity display (WRKDGACT command).

If the item you are synchronizing has multiple activity entries with varying statuses (for example, an entry with a status of completed, followed by a failed entry, and subsequent delayed entries), the SYNCDGACTE command will find the first non-completed activity entry and synchronize it. The same SYNCDGACTE request will then find the next non-completed entry and synchronize it, and will continue to synchronize these non-completed entries until all entries for that object have been synchronized. Any existing active, delayed, or failed activity entries for the specified object are processed and set to ‘completed by synchronization’ (CZ) when the synchronization request completes successfully. When all activity entries are completed for the specified object, only the status of the very last completed entry is changed from complete (CP) to ‘completed by synchronization’ (CZ).

Not supported: Spooled files and cooperatively processed files are not eligible to be synchronized using the SYNCDGACTE command.
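As a sketch of the data-group form of the Synchronize commands described above, a single request can synchronize everything identified by the data group's configured entries. The three-part data group name MYDG SYS1 SYS2 is a placeholder, not a value from this manual:

```
/* Synchronize the IFS objects configured for replication by  */
/* the data group; run during off-peak hours to reduce        */
/* contention for object locks                                */
SYNCIFS DGDFN(MYDG SYS1 SYS2)
```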
Status changes during synchronization: During synchronization processing, the status of the activity entries being synchronized is set to ‘pending synchronization’ (PZ) and then to ‘pending completion’ (PC). When the synchronization request completes, the status of the activity entries is set to either ‘completed by synchronization’ (CZ) or ‘failed synchronization’ (FZ). If the data group is inactive, the status of the activity entries remains either ‘pending synchronization’ (PZ) or ‘pending completion’ (PC) when the synchronization request completes. When the data group is restarted, the status of the activity entries is set to either ‘completed by synchronization’ (CZ) or ‘failed synchronization’ (FZ).

About synchronizing file entries (SYNCDGFE command)

The Synchronize Data Group File Entry (SYNCDGFE) command synchronizes database files associated with a data group by data group file entries, or optionally a single data group file entry.

Active data group required: Because the SYNCDGFE command runs through a database apply job, the data group must be active when the command is used.

Choice of what to synchronize: The Sending mode (METHOD) parameter provides granularity in specifying what is synchronized. Table 63 describes the choices.

Files with triggers: The SYNCDGFE command provides the ability to optionally disable triggers during synchronization processing and enable them again when processing is complete. The Disable triggers on file (DSBTRG) parameter specifies whether the database apply process (used for synchronization) disables triggers when processing a file. The default value *DGFE uses the data group file entry to determine whether triggers should be disabled. The value *YES will disable triggers on the target system during synchronization. In some environments, configuration options for the data group allow MIMIX to replicate trigger-generated entries and disable the triggers; when synchronizing a file with triggers in such an environment, you must specify *DATA as the sending mode.
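For example, the trigger guidance above might translate into a request like the following sketch. The METHOD and DSBTRG keywords are named in this topic; the data group name, library, file, and the FILE1 keyword for the file prompt are illustrative assumptions:

```
/* Synchronize one file's data using MIMIX copy active file   */
/* processing, disabling its triggers on the target system    */
/* while the apply process fills the file (FILE1 keyword is   */
/* an assumption based on the command's prompts)              */
SYNCDGFE DGDFN(MYDG SYS1 SYS2) FILE1(MYLIB/ORDERS)
         METHOD(*DATA) DSBTRG(*YES)
```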
Table 63. Sending mode (METHOD) choices on the SYNCDGFE command

*DATA — Only the physical file data is replicated, using MIMIX copy active file processing; file attributes are not replicated using this method. If the file does not exist on the target system, MIMIX uses save and restore operations to create the file on the target system and then uses copy active file processing to fill it with data from the file on the source system. If the file exists on the target system, MIMIX refreshes its contents. This is the default value.
*ATR — Only the physical file attributes are replicated and synchronized.
*AUT — Only the authorities for the physical file are replicated and synchronized.
*SAVRST — The file's content and attributes are replicated using save and restore processing. This method allows save-while-active operations and also has the capability to save associated logical files.

Including logical files: The Include logical files (INCLF) parameter allows you to include any attached logical files in the synchronization request. This parameter is only valid when *SAVRST is specified for the Sending mode prompt. Logical files may be mapped by using object entries. A physical file being synchronized cannot be name mapped if it is not in the same library as the logical file associated with it.

Including related files: You can optionally choose whether the synchronization request will include files related to the specified file by specifying *YES for the Include related (RELATED) parameter; the parameter defaults to *NO. Related files are those physical files which have a relationship with the selected physical file by means of one or more join logical files. Join logical files are logical files attached to fields in two or more physical files. In some environments, specifying *YES could result in a high number of files being synchronized, which could strain available communications and take a significant amount of time to complete.

Physical files with referential constraints: Physical files with referential constraints require a field in another physical file to be valid. When synchronizing physical files with referential constraints, ensure all files in the referential constraint structure are synchronized concurrently during a time of minimal activity on the source system. Doing so will ensure the integrity of synchronization points.

About synchronizing tracking entries

Tracking entries provide status of IFS objects, data areas, and data queues that are replicated using MIMIX advanced journaling. IFS tracking entries represent IFS objects; object tracking entries represent data areas or data queues. IFS tracking entries also track the file identifier (FID) of the object on the source and target systems.

You can synchronize the object represented by a tracking entry by using the synchronize option available on the Work with DG Object Tracking Entries display or the Work with DG IFS Tracking Entries display. For IFS tracking entries, the option calls the Synchronize IFS Object (SYNCIFS) command. For object tracking entries, the option calls the Synchronize Object (SYNCOBJ) command. The contents, attributes, and authorities of the item are synchronized between the source and target systems. When the apply session receives notification that the object represented by the tracking entry is synchronized successfully, the tracking entry status changes to *ACTIVE.

Notes:
• Before starting data groups for the first time, any existing objects to be replicated from the source system must be synchronized to the target system.
• For status changes to be effective for a tracking entry that is being synchronized, the data group must be active.
• Tracking entries may not exist for IFS objects, data areas, or data queues that have been configured for replication with advanced journaling since the last start of the data group. If tracking entries do not exist, you must create them by doing one of the following:
– Change the data group IFS entry or object entry configuration as needed and end and restart the data groups.
– Load tracking entries using the Load DG IFS Tracking Entries (LODDGIFSTE) or Load DG Obj Tracking Entries (LODDGOBJTE) commands. See “Loading tracking entries” on page 257.

Performing the initial synchronization

Ensuring that data is synchronized before you begin replication is crucial to successful replication. How you perform the initial synchronization can be influenced by the available communications bandwidth, the size of the data, and the complexity of describing the data. “Resources for synchronizing” on page 443 identifies available options.

Note: If you have configured or migrated a MIMIX configuration to use integrated support for IBM WebSphere MQ, you must use the procedure ‘Initial synchronization for replicated queue managers’ in the MIMIX for IBM WebSphere MQ book. Large IBM WebSphere MQ environments should plan to perform this during off-peak hours.

Establish a synchronization point

Just before you start the initial synchronization, establish a known start point for replication by changing journal receivers. The information gathered in this procedure will be used when you start replication for the first time. Do the following:
1. Quiesce your applications before continuing with the next step.
2. From the source system, use the following command to change the user journal receiver for each data group that will replicate from a user journal. On a command line, type:
   (installation-library-name)/CHGDGRCV DGDFN(data-group-name) TYPE(*DB)
   Record the new receiver names shown in the posted message.
3. Change the system journal receiver and record the new receiver name shown in the posted message. On a command line, type:
   CHGJRN JRN(QAUDJRN) JRNRCV(*GEN)
4. When you synchronize the database files and objects between systems, record the time at which you submit the synchronization requests, as this information is needed when determining the journal location at which to initially start replication.
5. Identify the synchronization starting point in the source user journal. This information will be needed when starting replication.
   a. Specify the source user journal for library/journal_name, the date of the first synchronize request for mm/dd/yyyy, and a time just before the first synchronize request for hh:mm:ss in the following command:
      DSPJRN JRN(library/journal_name) RCVRNG(*CURRENT) FROMTIME('mm/dd/yyyy' 'hh:mm:ss')
      Note: You can also specify values for the ENTTYP parameter to narrow the search. Typically, a synchronize request is represented by a journal entry for a save operation; Table 64 shows values which identify save actions associated with synchronizing.
   b. Page down to locate the Receiver name. This should be the same name as recorded in Step 2.
   c. Type 5 (Display entire entry) next to the entry and press Enter. The Display Journal Entry Details display appears.
   d. Press F10 (Display only entry details).
   e. Record the exact time and the sequence number of the journal entry associated with the first synchronize request.
6. Identify the synchronization starting point in the source system journal. This information will be needed when starting replication.
   a. Specify the date from Step 5a for mm/dd/yyyy and the time recorded in Step 5e for hh:mm:ss in the following command:
      DSPJRN JRN(QSYS/QAUDJRN) RCVRNG(*CURRENT) FROMTIME('mm/dd/yyyy' 'hh:mm:ss')
   b. Page down to locate the Receiver name. This should be the same name as recorded in Step 3.
   c. Type 5 (Display entire entry) next to the entry and press Enter. The Display Journal Entry Details display appears.
   d. Press F10 (Display only entry details).
   e. Record the sequence number associated with the first journal entry with the specified time stamp.

Table 64. Common values for using ENTTYP

Journal Code   Journaled Object Type   Common ENTTYP Values
F              File                    MS, SS
E              Data Area               ES, EW
Q              Data Queue              QX, QY
B              IFS object              FS, FW

Resources for synchronizing

The available choices for synchronizing are, in order of preference:
• IBM Save and Restore commands: IBM save and restore commands are best suited for initial synchronization and are used when performing a manual synchronization. While the MIMIX SYNCDG, SYNC, and SNDNET commands can be used, the communications bandwidth required for the size and quantity of objects may exceed capacity.
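Combining the DSPJRN command from the synchronization-point procedure with the ENTTYP values in Table 64, a narrowed search for the save entries written for database files might look like the following sketch; the journal name, date, and time are placeholders:

```
/* Display only the file save entries (journal code F) written */
/* at or after the time of the first synchronize request       */
DSPJRN JRN(MYLIB/MYJRN) RCVRNG(*CURRENT)
       FROMTIME('03/15/2009' '22:30:00') ENTTYP(MS SS)
```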
• SYNC commands: The Synchronize commands (SYNCOBJ, SYNCIFS, SYNCDLO) should be your starting point. These commands provide significantly more flexibility in object selection and also provide the ability to synchronize object authorities. By specifying a data group on any of these commands, you can synchronize the data defined by its data group entries. If you have configured or migrated to integrated advanced journaling, follow the SYNCIFS procedures for IFS objects and the SYNCOBJ procedures for data areas and data queues, respectively, and the SYNCDGFE procedures for files containing LOB data. You can also use the Synchronize Data Group File Entry (SYNCDGFE) command to synchronize database files and members; this command provides the ability to choose between MIMIX copy active file processing and save/restore processing, and provides choices for handling trigger programs during synchronization. You can also use options to synchronize objects associated with tracking entries from the Work with DG IFS Trk. Entries display and the Work with DG Obj. Trk. Entries display. This chapter (“Synchronizing data between systems” on page 431) includes additional information about the MIMIX SYNC and SNDNET commands.
• SYNCDG command: The SYNCDG command is intended especially for performing the initial synchronization of one or more data groups by MIMIX IntelliStart™; it can be performed automatically through MIMIX IntelliStart and can also be used in other situations where data groups are not synchronized. The SYNCDG command synchronizes by using the auditing and automatic recovery support provided by MIMIX® AutoGuard™ to synchronize an enabled data group between the source system and the target system. Because this command requires that journaling and data group replication processes be started before synchronization starts, it may not be appropriate for some environments. This command can be long-running; it submits a batch program that can run for several days. The SYNCDG command can only be run on the management system, and only one instance of the command per data group can be running at any time.
• SNDNET commands: The Send Network commands (SNDNETOBJ, SNDNETIFS, SNDNETDLO) support fewer options for selecting and specifying multiple objects and do not provide a way to specify by data group. These commands may require multiple invocations per path, folder, or library.

Using SYNCDG to perform the initial synchronization

This topic describes the procedure for performing the initial synchronization using the Synchronize Data Group (SYNCDG) command prior to beginning replication. The initial synchronization ensures that data is the same on each system and reduces the time and complexity involved with starting replication for the first time. The SYNCDG command utilizes the auditing and automatic recovery functions of MIMIX AutoGuard to synchronize an enabled data group between the source system and the target system.
Note: The SYNCDG command will not process a request to synchronize a data group that is currently using the MIMIX CDP™ feature. This feature is in use if a recovery window is configured or when a recovery point is set for a data group. Also, do not configure a recovery window or set a recovery point while a SYNCDG request is in progress for the data group; the MIMIX CDP feature may not protect data under these circumstances.

Before running this command:
• Apply any IBM PTFs (or their supersedes) associated with IBM i releases as they pertain to your environment. Log in to Support Central and access the Technical Documents page for a list of required and recommended IBM PTFs.
• Ensure the following conditions are met for each data group that you want to synchronize:
– All replication processes are active.
– Journaling is started on the source system for everything defined to the data group.
– No other audits (comparisons or recoveries) are in progress when the SYNCDG is requested. While the synchronization is in progress, other audits for the data group are prevented from running.
– Collector services has been started.
– The user ID submitting the SYNCDG has *MGT authority in product level security if it is enabled for the installation.

MIMIX Availability Manager displays initialization mode on the Audit Summary and Compliance interfaces while running this command if the data group definition (DGDFN) specifies *ALL.

To perform the initial synchronization using the SYNCDG command defaults, do the following from MIMIX Availability Manager:
1. Select the following from the navigation bar:
   a. Installations - select the installation for which you want to perform the initial synchronization.
   b. Systems - select the system for which you want to perform the initial synchronization.
   c. Details - select Data Groups.
2. From the upper portion of the Data Groups Status window, select Start All from the Action drop-down. The Start Data Groups window appears.
3. Accept the defaults and click OK.
4. From the Details section of the navigation bar, select Command History.
5. In the Command History window, type SYNCDG and click the Prompt button. The Synchronize Data Group (SYNCDG) command prompt opens.
6. Click Advanced and specify the following values by pressing F4 for valid options on each parameter or use the drop-down menu:
   • Data group definition (DGDFN)
   • Job description (JOBD)
7. Click OK to perform the initial synchronization.
8. Verify your configuration is using MIMIX AutoGuard. This step includes performing audits to verify that journaling and other aspects of your environment are ready to use. Audits automatically check for and attempt to correct differences found between the source system and the target system. See “Verifying the initial synchronization” on page 447 for more information.

From a 5250 emulator, do the following:
1. Use the command STRDG DGDFN(*ALL)
2. Type the command SYNCDG and press Enter.
3. Specify the following values, pressing F4 for valid options on each parameter:
   • Data group definition (DGDFN)
   • Job description (JOBD)
4. Press Enter to perform the initial synchronization.
5. Verify your configuration is using MIMIX AutoGuard. This step includes performing audits to verify that journaling and other aspects of your environment are ready to use. Audits automatically check for and attempt to correct differences found between the source system and the target system. See “Verifying the initial synchronization” on page 447 for more information.
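Once the conditions above are met, the 5250 procedure reduces to two commands, sketched here with a placeholder three-part data group name (MYDG SYS1 SYS2 is illustrative, not a value from this manual):

```
/* Start all data groups, then submit the long-running        */
/* initial synchronization request for one data group; only   */
/* one SYNCDG per data group can run at a time                */
STRDG DGDFN(*ALL)
SYNCDG DGDFN(MYDG SYS1 SYS2)
```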
Verifying the initial synchronization

This procedure uses MIMIX AutoGuard™ to ensure your environment is ready to start replication. You should not use this procedure if you have already synchronized your systems using the Synchronize Data Group (SYNCDG) command or the automatic synchronization method in MIMIX IntelliStart.

The audits used in this procedure will:
• Verify that journaling is started on the source and target systems for the items you identified in the deployed replication patterns. Without journaling, replication will not occur.
• Verify that data is synchronized between systems. Audits will detect potential problems with synchronization and attempt to automatically recover differences found. Shipped policy settings for MIMIX allow audits to automatically attempt recovery actions for any problems they detect.

When verifying an initial configuration, you need to perform a subset of the available audits for each data group in a specific order. Do the following:
1. Check whether all necessary journaling is started for each data group. Enter the following command:
   (installation-library-name)/DSPDGSTS DGDFN(data-group-name) VIEW(*DBFETE)
   On the File and Tracking Entry Status display, the File Entries column identifies how many file entries were configured from your replication patterns and indicates whether any file entries are not journaled on the source or target systems. If your configuration permits user journal replication of IFS objects, data areas, or data queues, the Tracking Entries columns provide similar information.
2. Use MIMIX AutoGuard to audit your environment, then check the results and resolve any problems. Do the following:
   a. To change the number of active audits at any one time, enter the following command:
      CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY) MAXACT(*NOMAX)
   b. To access the audits, enter the following command:
      (installation-library-name)/WRKAUD
      Each audit listed on the Work with Audits display is a unique combination of data group and MIMIX rule. Use F18 (Subset) to subset the audits by the name of the rule you want to run.
   c. Type a 9 (Run rule) next to the audit for each data group and press Enter.
   d. Repeat Step 2b and Step 2c for each rule in Table 65 until you have started all the listed audits for all data groups. You may need to change subsetting values again so you can view all rule and data group combinations at once.
3. Wait for all audits to complete. Some audits may take time to complete.
4. On the Work with Audits display, check the Audit Status column for the following value:
   *NOTRCVD - The comparison performed by the rule detected differences, and some of the differences were not automatically recovered. Action is required. View notifications for more information and resolve the problem.
   Note: For more information about resolving reported problems, see “Interpreting audit results” on page 540.
5. Reset the number of active audit jobs to values consistent with regular auditing:
   CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY) MAXACT(5)

Table 65. Rules for initial validation, listed in the order to be performed

Rule Name
1. #DGFE
2. #OBJATR
3. #FILATR
4. #FILATRMBR
5. #IFSATR
6. #DLOATR
Synchronizing database files

The procedures in this topic use the Synchronize DG File Entry (SYNCDGFE) command to synchronize selected database files associated with a data group between two systems. You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 433
• “About synchronizing file entries (SYNCDGFE command)” on page 439

To synchronize a database file between two systems using the SYNCDGFE command defaults, do the following or use the alternative process described below:
1. From the Work with DG Definitions display, type 17 (File entries) next to the data group to which the file you want to synchronize is defined and press Enter. The Work with DG File Entries display appears.
2. Type 16 (Sync DG file entry) next to the file entry for the file you want to synchronize and press Enter.
   Note: If you are synchronizing file entries as part of your initial configuration, all file entries will be synchronized; you can type 16 next to the first file entry and then press F13 (Repeat).

The default value *YES for the Release wait prompt indicates that MIMIX will hold the file entry in a release-wait state until a synchronization point is reached; then it will change the status to active. If you want to hold the file entry for your intervention, specify *NO.

Alternative process: You will need to identify the data group and data group file entry in this procedure. You will also need to make choices about the sending mode and trigger support (Step 8 and Step 9). For additional information, see “About synchronizing file entries (SYNCDGFE command)” on page 439.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 41 (Synchronize DG File Entry) and press Enter. The Synchronize DG File Entry (SYNCDGFE) display appears.
3. At the Data group definition prompts, specify the name of the data group with which the file is associated.
4. At the System 1 file and Library prompts, specify the name of the database file you want to synchronize and the library in which it is located on system 1.
5. If you want to synchronize only one member of a file, specify its name at the Member prompt.
6. At the Data source prompt, ensure that the value matches the system that you want to use as the source for the synchronization.
7. At the Release wait prompt, accept the default or specify another value.
8. At the Sending mode prompt, specify the value for the type of data to be synchronized.
9. At the Disable triggers on file prompt, specify whether the database apply process should disable triggers when processing the file. Accept *DGFE to use the value specified in the data group file entry or specify another value.
10. At the Include logical files prompt, accept the default or specify *NO to indicate whether you want to include attached logical files when sending the file.
11. At the Allow object differences prompt, accept the default or specify *YES to indicate whether certain differences encountered during the restore of the object on the target system should be allowed.
12. To synchronize the file using the values shown, press Enter and skip to Step 16. To change any of the additional parameters, press F10 (Additional parameters) and continue with the next step.
13. At the Save active prompt, accept *NO so that objects in use are not saved, or specify another value.
14. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.
15. Verify that the values shown for Include related files, Maximum sending file size (MB), and Submit to batch are what you want.
16. To synchronize the file, press Enter.

Synchronizing objects

The procedures in this topic use the Synchronize Object (SYNCOBJ) command to synchronize library-based objects between two systems. The objects to be synchronized can be defined to a data group or can be independent of a data group. You should be aware of the information in the following topics:
• “Considerations for synchronizing using MIMIX commands” on page 433
• “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 437

Note: If you run this command from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system.

To synchronize library-based objects associated with a data group

To synchronize objects between two systems that are identified for replication by data group object entries, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 42 (Synchronize object) and press Enter. The Synchronize Object (SYNCOBJ) command appears.
3. At the Data group definition prompts, specify the data group for which you want to synchronize objects. To synchronize all objects identified by data group object entries for this data group, skip to Step 5.
4. To synchronize a subset of objects defined to the data group, at the Object prompts specify elements for one or more object selectors to act as filters to the objects defined to the data group. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see “Object selection for Compare and Synchronize commands” on page 360. For each selector, do the following:
   a. At the Object and library prompts, specify the name or the generic value you want.
   b. At the Object type prompt, accept *ALL or specify a specific object type to synchronize.
   c. At the Object attribute prompt, accept *ALL to synchronize the entire list of supported attributes or press F4 to select from a list of attributes.
   d. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization.
   Note: The System 2 object and System 2 library prompts are ignored when a data group is specified.
5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects, or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not saved, or specify another value.
7. At the Save active wait time prompt, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
   Note: When a data group is specified, the following parameters are ignored: System 1 ASP group or device, System 2 ASP device number, and System 2 ASP device name.
9. Determine how the synchronize request will be processed. Choose one of the following:
   • To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
   • To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.
10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 12. To start the synchronization, press Enter. To synchronize library-based objects without a data group To synchronize objects between two systems, do the following: 1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter. 2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 42 (Synchronize object) and press Enter. The Synchronize Object (SYNCOBJ) command appears. 3. At the Data group definition prompts, specify *NONE. 4. At the Object prompts, specify elements for one or more object selectors that identify objects to synchronize. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see “Object selection for Compare and Synchronize commands” on page 360. For each selector, do the following: a. At the Object and library prompts, specify the name or the generic value you want. b. At the Object type prompt, accept *ALL or specify a specific object type to synchronize. 452 Synchronizing objects c. At the Object attribute prompt, accept *ALL to synchronize the entire list of supported attributes or press F4 to see a valid list of attributes. d. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization. e. At the System 2 object and System 2 library prompts, if the object and library names on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the name of the object and library on system 2 to which you want to synchronize the objects. f. Press Enter. 5. At the System 2 parameter prompt, specify the name of the remote system to which to synchronize the objects. 6. 
At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value. Note: When you specify *ONLY and a data group name is not specified, if any files that are processed by this command are cooperatively processed and the data group that contains these files is active, the command could fail if the database apply job has a lock on these files. 7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value. 8. At the Save active wait time, specify the number of seconds to wait for a commit boundary or a lock on the object before continuing the save. 9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized. 10. At the System 1 ASP group or device prompt, specify the name of the auxiliary storage pool (ASP) group or device where objects configured for replication may reside on system 1. Otherwise, accept the default to use the current job’s ASP group name. 11. At the System 2 ASP device number prompt, specify the number of the auxiliary storage pool (ASP) where objects configured for replication may reside on system 2. Otherwise, accept the default to use the same ASP number from which the object was saved (*SAVASP). Only the libraries in the system ASP and any basic user ASPs from system 2 will be in the library name space. 12. At the System 2 ASP device name prompt, specify the name of the auxiliary storage pool (ASP) device where objects configured for replication may reside on system 2. Otherwise, accept the default to use the value specified for the system 1 ASP group or device (*ASPGRP1). 13. Determine how the synchronize request will be processed. Choose one of the following • • To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. 
The request to synchronize will be started. 453 14. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 15. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 16. To start the synchronization, press Enter. 454 Synchronizing IFS objects Synchronizing IFS objects The procedures in this topic use the Synchronize IFS Object (SYNCIFS) command to synchronize IFS objects between two systems. The IFS objects to be synchronized can be defined to a data group or can be independent of a data group. You should be aware of the information in the following topics: • • “Considerations for synchronizing using MIMIX commands” on page 433 “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 437 To synchronize IFS objects associated with a data group To synchronize IFS objects between two systems that are identified for replication by data group IFS entries, do the following: 1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter. 2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43 (Synchronize IFS object) and press Enter. The Synchronize IFS Object (SYNCIFS) command appears. 3. At the Data group definition prompts, specify the data group for which you want to synchronize objects. Note: if you run this command from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system. 4. To synchronize all IFS objects identified by data group IFS entries for this data group, skip to Step 5. To synchronize a subset of IFS objects defined to the data group, at the IFS objects prompts specify elements for one or more object selectors to act as filters to the objects defined to the data group. For more information, see “Object selection for Compare and Synchronize commands” on page 360. 
You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following: a. At the Object path name prompt, you can optionally accept *ALL or specify the name or generic value you want. Note: The IFS object path name can be used alone or in combination with FID values. See Step 12. b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the scope of IFS objects to be processed. c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the IFS object path name. d. At the Object type prompt, accept *ALL or specify a specific IFS object type to synchronize. 455 e. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization. Note: The System 2 object path name and System 2 name pattern values are ignored when a data group is specified. f. Press Enter. 5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value. 6. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value. 7. If you chose values in Step 6 to save active objects, you can optionally specify additional options at the Save active option prompt. Press F1 (Help) for additional information. 8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized. 9. Determine how the synchronize request will be processed. Choose one of the following: • • To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step. To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. Continue with Step 12. 10. 
At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 12. To optionally specify a file identifier (FID) for the object on either system, do the following: a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS object on system 1. Values for System 1 file identifier prompt can be used alone or in combination with the IFS object path name. b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS object on system 2. Values for System 2 file identifier prompt can be used alone or in combination with the IFS object path name. Note: For more information, see “Using file identifiers (FIDs) for IFS objects” on page 284. 13. To start the synchronization, press Enter. To synchronize IFS objects without a data group To synchronize IFS objects not associated with a data group between two systems, do the following: 1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter. 456 Synchronizing IFS objects 2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43 (Synchronize IFS object) and press Enter. The Synchronize IFS Object (SYNCIFS) command appears. 3. At the Data group definition prompts, specify *NONE. 4. At the IFS objects prompts, specify elements for one or more object selectors that identify IFS objects to synchronize. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see the topic on object selection in the MIMIX Reference book. For each selector, do the following: a. At the Object path name prompt, you can optionally accept *ALL or specify the name or generic value you want. Note: The IFS object path name can be used alone or in combination with FID values. See Step 13. b. 
At the Directory subtree prompt, accept *NONE or specify *ALL to define the scope of IFS objects to be processed. c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the IFS object path name. d. At the Object type prompt, accept *ALL or specify a specific IFS object type to synchronize. e. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization. f. At the System 2 object path name and System 2 name pattern prompts, if the IFS object path name and name pattern on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the path name and pattern on system 2 to which you want to synchronize the IFS objects. g. Press Enter. 5. At the System 2 parameter prompt, specify the name of the remote system on which to synchronize the IFS objects. 6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value. 7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value. 8. If you chose values in Step 7 to save active objects, you can optionally specify additional options at the Save active option prompt. Press F1 (Help) for additional information. 9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized. 10. Determine how the synchronize request will be processed. Choose one of the following: • To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step. 457 • To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. Continue with Step 13. 11. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 12. 
At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 13. To optionally specify a file identifier (FID) for the object on either system, do the following: a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS object on system 1. Values for System 1 file identifier prompt can be used alone or in combination with the IFS object path name. b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS object on system 2. Values for System 2 file identifier prompt can be used alone or in combination with the IFS object path name. Note: For more information, see “Using file identifiers (FIDs) for IFS objects” on page 284. 14. To start the synchronization, press Enter. 458 Synchronizing DLOs Synchronizing DLOs The procedures in this topic use the Synchronize DLO (SYNCDLO) command to synchronize document library objects (DLOs) between two systems. The DLOs to be synchronized can be defined to a data group or can be independent of a data group. You should be aware of the information in the following topics: • • “Considerations for synchronizing using MIMIX commands” on page 433 “About MIMIX commands for synchronizing objects, IFS objects, and DLOs” on page 437 To synchronize DLOs associated with a data group To synchronize DLOs between two systems that are identified for replication by data group DLO entries, do the following: 1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter. 2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 44 (Synchronize DLO) and press Enter. The Synchronize DLO (SYNCDLO) command appears. 3. At the Data group definition prompts, specify the data group for which you want to synchronize DLOs. Note: if you run this command from a target system, you must specify the name of a data group to avoid overwriting the objects on the source system. 4. 
To synchronize all objects identified by data group DLO entries for this data group, skip to Step 5. To synchronize a subset of objects defined to the data group, at the Document library objects prompts specify elements for one or more object selectors to act as filters to DLOs defined to the data group. For more information, see “Object selection for Compare and Synchronize commands” on page 360. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For each selector, do the following: a. At the DLO path name prompt, accept *ALL or specify the name or the generic value you want. b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the scope of DLOs to be processed. c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the DLO path name. d. At the DLO type prompt, accept *ALL or specify a specific DLO type to synchronize. e. At the Owner prompt, accept *ALL or specify the owner of the DLO. f. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization. Note: The System 2 DLO path name and System 2 DLO name pattern values 459 are ignored when a data group is specified. g. Press Enter. 5. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value. 6. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value. 7. At the Save active wait time, specify the number of seconds to wait for a lock on the object before continuing the save. 8. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized. 9. Determine how the synchronize request will be processed. Choose one of the following: • • To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. 
Continue with the next step. To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started. 10. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request. 11. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name. 12. To start the synchronization, press Enter. To synchronize DLOs without a data group To synchronize DLOs between two systems, do the following: 1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter. 2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 44 (Synchronize DLO) and press Enter. The Synchronize DLO (SYNCDLO) command appears. 3. At the Data group definition prompts, specify *NONE. 4. At the Document library objects prompts, specify elements for one or more object selectors that identify DLOs to synchronize. You can specify as many as 300 object selectors by using the + for more prompt for each selector. For more information, see “Object selection for Compare and Synchronize commands” on page 360. For each selector, do the following: a. At the DLO path name prompt, accept *ALL or specify the name or the generic value you want. b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the scope of DLOs to be processed. 460 Synchronizing DLOs c. At the Name pattern prompt, specify a value if you want to place an additional filter on the last component of the DLO path name. d. At the DLO type prompt, accept *ALL or specify a specific DLO type to synchronize. e. At the Owner prompt, accept *ALL or specify the owner of the DLO. f. At the Include or omit prompt, accept *INCLUDE to include the object for synchronization or specify *OMIT to omit the object from synchronization. g. 
At the System 2 DLO path name and System 2 DLO name pattern prompts, if the DLO path name and name pattern on system 2 are equal to the system 1 names, accept the defaults. Otherwise, specify the path name and pattern on system 2 to which you want to synchronize the DLOs.
h. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on which to synchronize the DLOs.
6. At the Synchronize authorities prompt, accept *YES to synchronize both authorities and objects or specify another value.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved or specify another value.
8. At the Save active wait time prompt, specify the number of seconds to wait for a lock on the object before continuing the save.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an object can be and still be synchronized.
10. Determine how the synchronize request will be processed. Choose one of the following:
• To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.
11. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
12. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
13. To start the synchronization, press Enter.
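The prompts in the procedures above correspond to parameters on the SYNCDLO command string, which can also be run directly from a command line. The following sketch shows what such a request might look like; the parameter keywords shown (DGDFN, DLO, SYS2, SYNCAUT, SBMJOB) and the element order within the DLO selector are illustrative assumptions rather than confirmed syntax, so prompt the command with F4 and verify each keyword before use.

```
/* Hypothetical SYNCDLO command string: synchronize all DLOs in folder  */
/* ACCTG to remote system SYSTEM2, without a data group.                */
/* All parameter keywords below are assumptions; press F4 on SYNCDLO    */
/* to see the actual prompts and keywords.                              */
SYNCDLO DGDFN(*NONE)
        DLO((ACCTG/*ALL *ALL *ALL *INCLUDE))
        SYS2(SYSTEM2)
        SYNCAUT(*YES)
        SBMJOB(*YES)
```

Prompting the command (F4) and pressing F1 (Help) on each parameter remains the authoritative way to confirm the values each prompt accepts.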
Synchronizing data group activity entries

The procedures in this topic use the Synchronize DG Activity Entry (SYNCDGACTE) command to synchronize an object that is identified by a data group activity entry with any status value: *ACTIVE, *DELAYED, *FAILED, or *COMPLETED. You should be aware of the information in the following topics:
• "Considerations for synchronizing using MIMIX commands" on page 433
• "About synchronizing data group activity entries (SYNCDGACTE)" on page 438

To synchronize an object identified by a data group activity entry, do the following:
1. From the Work with Data Group Activity Entry display, type 16 (Synchronize) next to the activity entry that identifies the object you want to synchronize and press Enter.
2. The Confirm Synchronize of Object display appears. Press Enter to confirm the synchronization.

Alternative process: You will need to identify the data group and data group activity entry in this procedure.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 45 (Sync DG activity entry) and press Enter.
3. At the Data group definition prompts, specify the data group name.
4. At the Object type prompt, specify a specific object type to synchronize or press F4 to see a valid list.
5. Additional parameters appear based on the object type selected. Do one of the following:
• For files, you will see the Object, Library, and Member prompts. Specify the object, library, and member that you want to synchronize.
• For objects, you will see the Object and Library prompts. Specify the object and library of the object you want to synchronize.
• For IFS objects, you will see the IFS object prompt. Specify the IFS object that you want to synchronize.
• For DLOs, you will see the Document library object and Folder prompts. Specify the folder path and DLO name of the DLO you want to synchronize.
6.
Determine how the synchronize request will be processed. Choose one of the following:
• To submit the job for batch processing, accept the default value *YES for the Submit to batch prompt and press Enter. Continue with the next step.
• To not use batch processing for the job, specify *NO for the Submit to batch prompt and press Enter. The request to synchronize will be started.
7. At the Job description and Library prompts, specify the name and library of the job description used to submit the batch request.
8. At the Job name prompt, accept *CMD to use the command name to identify the job or specify a simple name.
9. To start the synchronization, press Enter.

Synchronizing tracking entries

Tracking entries are MIMIX constructs which identify IFS objects, data areas, or data queues configured for replication with MIMIX advanced journaling. You can use a tracking entry to synchronize the contents, attributes, and authorities of the item it represents. You should be aware of the information in the following topics:
• "Considerations for synchronizing using MIMIX commands" on page 433
• "About MIMIX commands for synchronizing objects, IFS objects, and DLOs" on page 437
• "About synchronizing tracking entries" on page 441

To synchronize an IFS tracking entry

To synchronize an object represented by an IFS tracking entry, do the following:
1. From the Work with DG IFS Tracking Entries (WRKDGIFSTE) display, type option 16 (Synchronize) next to the IFS tracking entry you want to synchronize. If you want to change options on the SYNCIFS command, press F4 (Prompt).
2. To synchronize the associated IFS object, press Enter.
3. When the apply session has been notified that the object has been synchronized, the status will change to *ACTIVE. To monitor the status, press F5 (Refresh).
4. If the synchronization fails, correct the errors and repeat the previous steps.
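Option 16 on the tracking entry display runs the SYNCIFS command for the tracked object; pressing F4 (Prompt) lets you adjust the command string before it runs. The command string involved resembles the following sketch. The parameter keywords (DGDFN, OBJ) and the element order within the object selector are illustrative assumptions, not confirmed syntax; prompt SYNCIFS with F4 to verify the actual parameters.

```
/* Hypothetical SYNCIFS command string for one tracked IFS object in    */
/* data group INVDG. Keywords below are assumptions; press F4 on        */
/* SYNCIFS to see the actual prompts and keywords.                      */
SYNCIFS DGDFN(INVDG)
        OBJ(('/orders/current/pending.dat' *NONE *INCLUDE))
```

Because the tracking entry already identifies the object and data group, the prompted command is normally pre-filled; change values only when you need a non-default save or authority behavior.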
To synchronize an object tracking entry To synchronize an object represented by an object tracking entry, do the following: 1. From the Work with DG Object Tracking Entries (WRKDGOBJTE) display, type option 16 (Synchronize) next to the object tracking entry you want to synchronize. If you want to change options on the SYNCOBJ command, press F4 (Prompt). 2. To synchronize the associated data area or data queue, press Enter. 3. When the apply session has been notified that the object has been synchronized, the status will change to *ACTIVE. To monitor the status, press F5 (Refresh). 4. If the synchronization fails, correct the errors and repeat the previous steps. 464 Sending library-based objects Sending library-based objects This procedure sends one or more library-based objects between two systems using the Send Network Object (SNDNETOBJ) command. Use the appropriate command: In general, you should use the SYNCOBJ command to synchronize objects between systems. For more information about differences between commands, see “Performing the initial synchronization” on page 442. You should be familiar with the information in the following topics before you use this command: • • • “Considerations for synchronizing using MIMIX commands” on page 433 “Synchronizing user profiles with the SNDNETOBJ command” on page 434 “Missing system distribution directory entries automatically added” on page 435 To send library-based objects between two systems, do the following: 1. If the objects you are sending are located in an independent auxiliary storage pool (ASP) on the source system, you must use the IBM command Set ASP Group (SETASPGRP) on the local system to change the ASP group for your job. This allows MIMIX to access the objects. 2. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter. 3. The MIMIX Utilities Menu appears. Select option 11 (Send object) and press Enter. 4. The Send Network Object (SNDNETOBJ) display appears. 
At the Object prompt, specify either *ALL, the name of an object, or a generic name. Note: You can specify as many as 50 objects. To expand this prompt for multiple entries, type a plus sign (+) at the prompt and press Enter. 5. Specify the name of the library that contains the objects at the Library prompt. 6. Specify the type of objects to be sent from the specified library at the Object type prompt. Notes: • If you specify *ALL, all object types supported by the IBM i Save Object (SAVOBJ) command are selected. The single values that are listed for this parameter are not included when *ALL is specified because they are not supported by the IBM i SAVOBJ command. To expand this field for multiple entries, type a plus sign (+) at the prompt and press Enter. • 7. Press Enter. 8. Additional prompts appear on the display. Do the following: a. Specify the name of the system to which you are sending objects at the Remote system prompt. 465 authority to the object on the remote system is determined by that system. Specifying 2 through 32 restores values to the basic user ASP specified. *JRNRCV. Specifying a value of 1 restores objects to the system ASP. place the cursor on the prompt and press F1 (Help). objects are restored to the same ASP device or number from which they were saved. The remaining prompts on the display are used for objects synchronized via a save and restore operation. Verify that the values shown are what you want. IBM restricts which object types are allowed in user ASPs. To have the authorities on the remote system determined by the settings of the local system. press F10 (Additional parameters). If the specified ASP number does not exist on the target system or if it has overflowed. By default. 466 . If the library on the remote system has a different name. then specify a value for either the Restore to ASP device prompt or the Restore to ASP number prompt. To change the location where objects are restored. 
Some object types may not be restored to user ASPs.b. Note: Object types *JRN. then specify *SRC at the Target authority prompt. 9. press Enter. 10. To start sending the specified objects. the objects are placed in the system ASP on the target system. press F10 (Additional parameters). 11. By default. *LIB. To see a description of each prompt and its available values. c. specify its name at the Remote library prompt. and *SAVF can be restored to any ASP. Additional parameters appear which MIMIX uses in the save and restore operations. 5. You should be familiar with the information in “Considerations for synchronizing using MIMIX commands” on page 433. At the Object prompt. you should use the SYNCIFS command to synchronize IFS objects between systems. do the following: 1. From the MIMIX Intermediate Main Menu. 4. the name of the IFS object to send. 6. For more information about differences between commands. Use the appropriate command: In general. place the cursor on the prompt and press F1 (Help). To start sending the specified IFS objects. The MIMIX Utilities Menu appears. 467 . 7. Select option 13 (Send IFS object) and press Enter. see “Performing the initial synchronization” on page 442. Press F10 (Additional parameters). To send IFS objects between two systems. Verify that the values shown for the additional prompts are what you want.Sending IFS objects Sending IFS objects This procedure uses IBM i save and restore functions to send one or more integrated files system (IFS) objects between two systems with the Send Network IFS (SNDNETIFS) command. press Enter. Note: You can specify as many as 30 IFS objects. To expand this prompt for multiple entries. Specify the name of the system to which you are sending IFS objects at the Remote system prompt. 3. select option 13 (Utilities menu) and press Enter. The Send Network IFS (SNDNETIFS) display appears. type a plus sign (+) at the prompt and press Enter. To see a description of each prompt and its available values. 
Sending DLO objects

This procedure uses IBM i save and restore functions to send one or more document library objects (DLOs) between two systems using the Send Network DLO (SNDNETDLO) command. When you are configuring for system journal replication, use this procedure from the source system to send DLOs to the target system for replication.

You should be familiar with the information in "Considerations for synchronizing using MIMIX commands" on page 433.

Use the appropriate command: In general, you should use the SYNCDLO command to synchronize objects between systems. For more information about differences between commands, see "Performing the initial synchronization" on page 442.

To send DLO objects between systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and press Enter. The MIMIX Utilities Menu appears.
2. Select option 12 (Send DLO object) and press Enter. The Send Network DLO (SNDNETDLO) display appears. Specify a folder name in the Folder field and a network system name in the Remote system field.
3. At the Document library object prompt, specify either *ALL or the name of the DLO.
   Note: You can specify multiple DLOs. To expand this prompt for multiple entries, type a plus sign (+) at the prompt and press Enter.
4. Specify the name of the folder that contains the DLOs at the Folder prompt.
5. Specify the name of the system to which you are sending DLOs at the Remote system prompt.
6. To see a description of each prompt and its available values, place the cursor on the prompt and press F1 (Help).
7. Press F10 (Additional parameters). Additional parameters appear on the display. By default, MIMIX uses the Remote folder, Save active, Save active wait time, and Allow object differences prompts in the save and restore operations.
8. To have the authorities on the remote system determined by the settings of the local system, specify *SRC at the Target authority prompt. By default, authority to the object on the remote system is determined by that system.
9. Verify that the values shown are what you want.
10. To start sending the specified DLOs, press Enter.

CHAPTER 21 Introduction to programming

MIMIX includes a variety of functions that you can use to extend MIMIX capabilities through automation and customization. Commands are typically set with default values that reflect the recommendation of Vision Solutions. MIMIX supports batch output jobs on numerous commands and provides several forms of output, including outfiles. The MIMIX message log provides a common location to see messages from all MIMIX products. For more information, see "Output and batch guidelines" on page 480.

The topics in this chapter include:
• "Support for customizing" on page 470 describes several functions you can use to customize your replication environment.
• "Completion and escape messages for comparison commands" on page 472 lists completion, diagnostic, and escape messages generated by comparison commands.
• "Adding messages to the MIMIX message log" on page 479 describes how you can include your own messaging from automation programs in the MIMIX message log.
• "Displaying a list of commands in a library" on page 485 describes how to display the super set of all commands known to License Manager or subset the list by a particular library.
• "Running commands on a remote system" on page 486 describes how to run a single command or multiple commands on a remote system.
• "Procedures for running commands RUNCMD, RUNCMDS" on page 487 provides procedures for using run commands with a specific protocol or by specifying a protocol through existing MIMIX configuration elements.
• "Using lists of retrieve commands" on page 493 identifies how to use MIMIX list commands to include retrieve commands in automation.
• "Changing command defaults" on page 494 provides a method for customizing default values should your business needs require it.
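As a rough illustration of how automation might consume such a common log, the sketch below models log entries as simple records with a process type and severity, and filters them the way the message log can be filtered by command name. The record layout and function name are assumptions for this example only; they do not reflect the actual MIMIX message log format or API.

```python
# Toy model of a shared message log that automation can filter by
# process type (e.g., a comparison command name) and by severity.
# Field names are invented for this sketch, not MIMIX's real format.

from typing import Dict, List, Optional

def filter_log(log: List[Dict], process_type: Optional[str] = None,
               severity: Optional[str] = None) -> List[Dict]:
    """Return the entries matching the given process type and/or severity."""
    result = []
    for entry in log:
        if process_type and entry["process_type"] != process_type:
            continue
        if severity and entry["severity"] != severity:
            continue
        result.append(entry)
    return result

# Message IDs below are examples taken from the comparison-command lists
# later in this chapter.
message_log = [
    {"process_type": "CMPFILA", "severity": "completion", "msgid": "LVI3E01"},
    {"process_type": "CMPFILA", "severity": "escape",     "msgid": "LVE3E05"},
    {"process_type": "CMPOBJA", "severity": "escape",     "msgid": "LVE3E06"},
]

# Find escape messages produced by the CMPFILA command:
escapes = filter_log(message_log, process_type="CMPFILA", severity="escape")
print([e["msgid"] for e in escapes])
```

An automation program would typically branch on whether any escape entries were found for the command it just ran.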
Support for customizing

MIMIX includes several functions that you can use to customize processing within your replication environment.

User exit points

User exit points are predefined points within a MIMIX process at which you can call customized programs. User exit points allow you to insert customized programs at specific points in an application process to perform additional processing before continuing with the application's processing. With MIMIX user journal replication, MIMIX provides user exit points for journal receiver management. For more information, see Chapter 22, "Customizing with exit point programs."

Collision resolution

In the context of high availability, a collision is a clash of data that occurs when a target object and a source object are both updated at the same time. When the change to the source object is replicated to the target object, the data does not match and the collision is detected. When a collision is detected, by default the file is placed on hold due to an error (*HLDERR) and user action is needed to synchronize the files.

With MIMIX user journal replication, the definition of a collision is expanded to include any condition where the status of a file or a record is not what MIMIX determines it should be when MIMIX applies a journal transaction. Examples of these detected conditions include the following:
• Updating a record that does not exist
• Deleting a record that does not exist
• Writing to a record that already exists
• Updating a record for which the current record information does not match the before image

With collision resolution, MIMIX provides additional ways to automatically resolve detected collisions without user intervention. This process is called collision resolution. If a collision does occur, MIMIX attempts the specified collision resolution methods until either the collision is resolved or the file is placed on hold. You can specify collision resolution methods for a data group or for individual data group file entries. You can also specify a named collision resolution class. If you specify *AUTOSYNC for the collision resolution element of the file entry options, MIMIX attempts to fix any problems it detects by synchronizing the file.

The database apply process contains 12 collision points at which MIMIX can attempt to resolve a collision. A collision resolution class allows you to define what type of resolution to use at each of the collision points. Because you can specify different resolution methods to handle these different types of collisions, collision resolution classes allow you to specify several methods of resolution to try for each collision point and support the use of an exit program. For more information, see "Collision resolution" on page 345.
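The retry behavior that a collision resolution class describes can be sketched as follows. This is an illustrative model only, assuming invented method and collision-point names; it is not the MIMIX implementation or API. It tries each resolution method configured for a collision point in order, and falls back to placing the file on hold (*HLDERR) only when every method fails.

```python
# Conceptual sketch of collision-resolution-class processing.
# Method names and resolver signatures are invented for illustration;
# they are not part of the MIMIX product.

def resolve_collision(collision_point, resolution_class, collision):
    """Try each method configured for this collision point, in order.

    Returns the name of the method that resolved the collision, or
    "*HLDERR" if no method succeeded (the file goes on hold).
    """
    for method in resolution_class.get(collision_point, []):
        if method(collision):          # True means the collision is resolved
            return method.__name__
    return "*HLDERR"                   # default: hold the file for user action

# Hypothetical resolution methods:
def apply_anyway(collision):
    return collision.get("kind") == "update_missing_record"

def autosync(collision):
    return True                        # *AUTOSYNC-style: synchronize the file

# A named class can configure different methods per collision point:
resolution_class = {
    "update": [apply_anyway, autosync],
    "delete": [apply_anyway],          # no fallback: unresolved -> *HLDERR
}

print(resolve_collision("update", resolution_class, {"kind": "before_image_mismatch"}))
print(resolve_collision("delete", resolution_class, {"kind": "before_image_mismatch"}))
```

The point of the model is the ordering: resolution stops at the first method that succeeds, so a cheap check can be tried before a heavier method such as synchronizing the file.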
These additional choices for resolving collisions allow customized solutions for resolving collisions without requiring user action.

Completion and escape messages for comparison commands

When the comparison commands finish processing, a completion or escape message is issued. In the event of an escape message, a diagnostic message is issued prior to the escape message. The diagnostic message provides additional information regarding the error that occurred.

All completion or escape messages are sent to the MIMIX message log. You can work with the message log from either MIMIX Availability Manager or the 5250 emulator. To find messages for comparison commands, specify the name of the command as the process type. For more information about using the message log, see the Using MIMIX book.

CMPFILA messages

The following are the messages for CMPFILA, with a comparison level specification of *FILE:
• Completion LVI3E01 – This message indicates that all files were compared successfully.
• Escape LVE3E05 – This message indicates that files were compared with differences detected. If the cumulative differences include files that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3E09 – This message indicates that the CMPFILA command ended abnormally.
• Escape LVE3E17 – This message indicates that no object matched the specified selection criteria.
• Escape LVE3381 – This message indicates that compared files were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.
• Informational LVI3E06 – This message indicates that no object was selected to be processed.
• Diagnostic LVE3385 – This message indicates that differences were detected for an active file.
• Diagnostic LVE3E0D – This message indicates that a particular attribute compared differently.
• Diagnostic LVE3E12 – This message indicates that a file was not compared. The reason the file was not compared is included in the message.

The following are the messages for CMPFILA, with a comparison level specification of *MBR:
• Completion LVI3E05 – This message indicates that all members compared successfully.
• Escape LVE3E16 – This message indicates that members were compared with differences detected. If the cumulative differences include members that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
• Diagnostic LVE3388 – This message indicates that differences were detected for an active member.

CMPOBJA messages

The following are the messages for CMPOBJA:
• Completion LVI3E02 – This message indicates that objects were compared but no differences were detected. The LVI3E02 includes message data containing the number of objects compared, the system 1 name, and the system 2 name.
• Escape LVE3E06 – This message indicates that objects were compared and differences were detected. The LVE3E06 message includes the same message data as LVI3E02, and also includes the number of differences detected. If the cumulative differences include objects that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3E17 – This message indicates that no object matched the specified selection criteria.
• Escape LVE3380 – This message indicates that compared objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.
• Informational LVI3E06 – This message indicates that no object was selected to be processed.
• Diagnostic LVE3384 – This message indicates that differences were detected for an active object.
• Diagnostic LVE3E0F – This message indicates that a particular attribute was compared differently.

CMPIFSA messages

The following are the messages for CMPIFSA:
• Completion LVI3E03 – This message indicates that all IFS objects were compared successfully.
• Escape LVE3E07 – This message indicates that IFS objects were compared with differences detected. If the cumulative differences include IFS objects that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
• Diagnostic LVE3386 – This message indicates that differences were detected for an active IFS object.
• Diagnostic LVE3E14 – This message indicates that an IFS object was not compared. The reason the IFS object was not compared is included in the message.
• Escape LVE3382 – This message indicates that compared IFS objects were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter.

CMPDLOA messages

The following are the messages for CMPDLOA:
• Completion LVI3E04 – This message indicates that all DLOs were compared successfully.
• Escape LVE3E08 – This message indicates that DLOs were compared and differences were detected. If the cumulative differences include DLOs that were different but active within the time span specified on the Maximum replication lag (MAXREPLAG) parameter, this message also includes those differences.
• Escape LVE3E17 – This message indicates that no object matched the specified selection criteria.
• Informational LVI3E06 – This message indicates that no object was selected to be processed.
• Diagnostic LVE3E11 – This message indicates that a particular attribute compared differently.
• Diagnostic LVE3E15 – This message indicates that a DLO was not compared. The reason the DLO was not compared is included in the message.

CMPRCDCNT messages

The following are the messages for CMPRCDCNT:
• Escape LVE3D4D – This message indicates that ACTIVE(*YES) outfile processing failed and identifies the reason code.
• Escape LVE3D5F – This message indicates that an apply session exceeded the
• Informational LVI3E06 – This m