Lenovo® X6 Systems Solution™ for SAP HANA®
Implementation Guide for System x® X6 Servers

Lenovo Development for SAP Solutions, in cooperation with SAP AG

Created on 3rd July 2015 – Version 1.9.96-13

© Copyright Lenovo, 2015

Dear Reader,

We wish to explicitly state that this guidebook is for the System X6 based servers for SAP HANA Platform Edition (Type 6241, Models AC3/AC4/Hxx) based on the Intel® Xeon® Ivy Bridge EX or Haswell EX family of processors. Type 3837 of the System X6 based servers and the System eX5 based servers for SAP HANA Platform Edition (models 7147-H** and 7143-H**) are not discussed in this manual.

The Lenovo Systems X6 solution for SAP HANA Platform Edition is based on System X6 architecture building blocks that provide a highly scalable infrastructure for the SAP HANA Platform Edition appliance software. The Systems x3850 X6 and x3950 X6 servers, together with software such as the IBM General Parallel File System™ (GPFS), are used to run the SAP HANA Platform Edition appliance software. Lenovo has created orderable models upon which you may install and run the SAP HANA Platform Edition appliance software according to the sizing charts coordinated with SAP AG. For each workload type, special ordering options for the System x3850 X6 and System x3950 X6 Type 6241 Models AC3/AC4/Hxx have been approved by SAP and Lenovo to accommodate the requirements of the SAP HANA Platform Edition appliance software.

The Lenovo – SAP HANA Development Team

Copyrights and Trademarks

© Copyright 2010-2015 Lenovo. Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult your local Lenovo representative for information on the products and services currently available in your area. Any reference to a Lenovo product, program, or service is not intended to state or imply that only that Lenovo product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any Lenovo intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any other product, program, or service.

Lenovo may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

Lenovo (United States), Inc.
1009 Think Place - Building One
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing

LENOVO PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Neither this documentation nor any part of it may be copied or reproduced in any form or by any means or translated into another language without the prior consent of Lenovo.

This document could include technical inaccuracies or errors. The information contained in this document is subject to change without notice. Lenovo reserves the right to make any such changes without obligation to notify any person of such revision or changes. Lenovo makes no commitment to keep the information contained herein up to date.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this Lenovo product, and use of those Web sites is at your own risk.

Information concerning non-Lenovo products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide home pages. Lenovo has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-Lenovo products. Questions on the capability of non-Lenovo products should be addressed to the supplier of those products.

Edition Notice: 3rd July 2015. This is the thirteenth published edition of this document. The online copy is the master.

Lenovo, the Lenovo logo, System x and For Those Who Do are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. Other product and service names might be trademarks of Lenovo or other companies. A current list of Lenovo trademarks is available on the web at: http://www.lenovo.com/legal/copytrade.html.

IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in the United States and/or other countries. Adobe and PostScript are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. Fusion-io is a registered trademark of Fusion-io in the United States. Intel, Intel Xeon, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. SAP HANA is a trademark of SAP Corporation in the United States, other countries, or both. Other company, product or service names may be trademarks or service marks of others.
Contents

1 Abstract
2 Introduction
3 Solution Overview
4 Hardware Configurations
5 Networking
6 Guided Install of the Lenovo Solution
7 After Installation
8 Disaster Recovery
9 Mixed eX5/X6 Environments
10 Special Single Node Installation Scenarios
11 Virtualization
12 Upgrading the Hardware Configuration
13 Software Updates
14 Operating System Upgrade
15 System Check and Support
16 Backup and Restore of the Primary Partition
17 SAP HANA Backup and Recovery
18 Troubleshooting

Appendices

A GPFS Disk Descriptor Files
B Topology Vectors (GPFS 3.5 failure groups)
C Quotas
D Performance Settings
E Lenovo X6 Server MTM List & Model Overview
F Frequently Asked Questions
G References
H Changelog
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. Required SAP HANA directories for restore Topology Vectors in a 8 node DR-cluster . . 2015 XIII . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Support script usage . . . . . . . Example SUSE fstab entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Example SLES primary fstab file . . . Lenovo MTM Mapping & Model Overview Lenovo MTM Mapping & Model Overview ServeRAID M5120 Firmware Issues . Example UEFI Configuration for Primary Partition Example GRUB Configuration for Primary Partition Example GRUB Configuration for Backup Partition Example rsync command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Example Red Hat fstab entries . . . . . . . . . . . . . . . . . . . Example RHEL primary fstab file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Creating a copy of the motd file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Example SLES primary fstab file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 183 184 191 192 192 192 192 193 193 193 194 195 195 196 196 196 List of Listings 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 X6 Implementation Guide 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Support script output .Technical Documentation 53 54 55 56 57 58 59 Upgrade GPFS Portability Layer Checklist GPFS Upgrade Checklist . . Example rsync command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Example RHEL backup fstab file . . . . . . . . . . . . . . . . . . . . . . Technical Documentation List of Abbreviations ASU Lenovo Advanced Settings Utility BIOS Basic Input / Output System DR Disaster Recovery (previously SAP Disaster Tolerance) DT SAP Dynamic Tiering (not to be confused with Disaster Recovery (DR).96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. 2015 XIV . previously Disaster Tolerance (DT)) ELILO EFI Linux Loader IBM GPFS IBM General Parallel File System GRUB Grand Unified Bootloader GSS GPFS Storage Server IMM Integrated Management Module LILO Linux Loader MTM Machine Type Model NIC Network Interface Controller OLAP On Line Analytical Processing OLTP On Line Transaction Processing OS Operating System RHEL Red Hat Enterprise Linux SAP HANA SAP HANA Platform Edition SLES SUSE Linux Enterprise Server SLES for SAP SUSE Linux Enterprise Server for SAP Applications UEFI Unified Extensible Firmware Interface UUID Universally Unique Identifier VLAG Virtual Link Aggregation Group VLAN Virtual Local Area Network X6 Implementation Guide 1.9. Technical Documentation 1 Abstract This document provides general information specific to the Lenovo Systems Solution for SAP HANA Platform Edition (short: Lenovo Solution). and that he has been instructed how to install the SAP HANA1 software on Lenovo Systems hardware. This document assumes that the reader understands the basic structure and components of the SAP HANA Platform Edition (SAP HANA) software. 
Edition Notice: 3rd July 2015 This is the published edition of this document. The Lenovo Systems servers with local storage and Lenovo Systems Networking switches will be used to run SAP HANA. All Rights Reserved. Lenovo makes no commitment to keep the information contained herein up to date. 2015 1 . 1 SAP HANA Platform Edition X6 Implementation Guide 1. Lenovo has created orderable models upon which you may install and run the SAP HANA according to the sizing charts coordinated with SAP AG. storage and switches have been approved by SAP and Lenovo to accommodate the requirements for the SAP HANA. Attention IMPORTANT: Please do not attempt to install a system without having been instructed about the content of this document. that he has a solid understanding of Linux administration processes. Lenovo Solution is built with Lenovo Systems hardware based on Intel Xeon Architecture as building blocks for a scale-up or scale-out SAP HANA system. without the prior consent of Lenovo.9. These provide a highly-scalable infrastructure for SAP HANA.96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. The information contained in this document is subject to change without any notice. special ordering options for the Lenovo System servers. Operating System & GPFS Operations Guide (SAP Note 1650046). Lenovo makes no warranties or representations with respect to the content hereof and specifically disclaims any implied warranties of merchantability or fitness for any particular purpose. Note It is considered best practice to create backups before and recover the SAP HANA system after a major failure instead of relying on a fresh install with the help of this document. Lenovo assumes no responsibility for any errors that may appear in this document. For details on Backup and Recovery please refer to the Lenovo Solution Backup & Restore Guide as well as the Lenovo Solution Hardware. © Copyright 2014-2015 Lenovo. Lenovo reserves the right to make any such changes without obligation to notify any person of such revision or changes. Neither this documentation nor any part of it may be copied or reproduced in any form or by any means or translated into another language. The online copy is the master. For each workload type. Walldorf. Lenovo Development for SAP Solutions. Lenovo Development for SAP Solutions. Germany • Hans-Peter Droste. 1. specifically: • Abdelkader Sellami. Lenovo Development for SAP Solutions. Germany And many people at SAP Development. Germany • Henning Sackewitz. Lenovo Systems Lab Services.sap. SAP Development.9. The major products installed here are SAP HANA. Germany • Christoph Nelles. Lenovo Development for SAp Solutions. Germany 2 http://help. Lenovo Development for SAP Solutions. Germany • Detlev Freund. Lenovo Development for SAP Solutions. SAP HANA Development. Germany • Patrick Hartman. or Red Hat Enterprise Linux (RHEL). Germany • Richard Ott. Germany • Nils König.1 Preface & Scope The objective of this paper is to document the installation and configuration of the SAP HANA Platform Edition (SAP HANA) on System x hardware using a managed set up rather than manually installing each node from scratch. 2015 2 .2 Acknowledgements The authors of this document are: • Martin Bachmaier. For instructions how to administrate SAP HANA Platform Edition (SAP HANA) please refer to the SAP HANA Technical Operations Manual2 . Germany • Florian Bausch.com/hana_platform X6 Implementation Guide 1.96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. Germany • Volker Pense. 
Walldorf. Germany • Alexander Trefs. SAP HANA Development. Lenovo Development for SAP Solutions. Germany • Adolf Brosig. IBM GTS. Walldorf. Lenovo Development for SAP Solutions. Walldorf. Germany • Michael Reumann. Lenovo Technical Sales. US • Thorsten Nitsch. Walldorf. Lenovo Development for SAP Solutions. SAP HANA Support. IBM General Parallel File System (IBM GPFS) and the operating systems SUSE Linux Enterprise Server for SAP Applications (SLES for SAP). Lenovo Development for SAP Solutions. Operating System & GPFS Operations Guide. Germany • Keith Frisby. Germany • Oliver Rettig. Lenovo Development for SAP Solutions. Germany The authors would like to thank the following Lenovo and IBM colleagues: • Herbert Diether. Germany. Germany • Guido Kampe. Instructions how to administrate and maintain the other components delivered with the System x solution can be found in the SAP Note 1650046 – Lenovo Systems Solution Hardware. Germany • Helmut Cossmann. Lenovo Development for SAP Solutions.Technical Documentation 1. The Lenovo System x solution for SAP HANA Quick Start Guide provides an overview of the complete solution and instructions how to find service and support for your Lenovo Solution. Lenovo Development for SAP Solutions. 9. Such commitments are only made in Lenovo product announcements.com. Contact your local Lenovo office or Lenovo authorized reseller for the full text of the specific Statement of Direction. it is the customers’ responsibility to upgrade (or downgrade) to the recommended levels as instructed by System x support representatives. 1. If the customer would open an Lenovo support ticket for the system. SAP HANA Development. Please contact the sapsolutions@lenovo. A list of the minimally required versions can be found in SAP Note 1880960 – Lenovo Systems Solution for SAP HANA Platform Edition FW/OS/Driver Maintenance. depending on its version. operating systems. SAP HANA Support Development.com to get enrolled for education prior to installing an Lenovo Solution appliance. The information contained in this document has not been submitted to any formal test and is distributed AS IS. Only by following this path. Although this may be contrary to standard Lenovo Support processes. Germany • Oliver Rebholz.4 Disclaimer This document is subject to change without notification and will not cover the issues encountered in every customer situation.3 Feedback We are interested in your comments and feedback. These images have dependencies regarding the hardware. This will void the warranty and support of said machine.5 Support The System x SAP HANA development team provides new images for the SAP HANA appliance at regular intervals.96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. The use of the latest image for maintenance and installation of SAP HANA appliance is highly recommended. X6 Implementation Guide 1. and represent goals and objectives only. from following community (SAP HANA Support Document section) – SAP Solutions at Lenovo Community. In case of issues with the SAP HANA appliance. If identified as a hardware or file system issue. Walldorf. we will ask you to restrain from trying to apply what is described herein – you could void the preloaded system installation – and void the SAP certified configuration. Germany 1. If you are not familiar with the described system. Please send it to sapsolutions@lenovo. 
• Michael Becker, SAP HANA Support Development, Walldorf, Germany
• Oliver Rebholz, SAP HANA Development, Walldorf, Germany

1.3 Feedback

We are interested in your comments and feedback. Please send them to [email protected].

1.4 Disclaimer

This document is subject to change without notification and will not cover the issues encountered in every customer situation. It should be used only in conjunction with the official product literature. The information contained in this document has not been submitted to any formal test and is distributed AS IS.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function, or delivery schedules with respect to any future products; such commitments are only made in Lenovo product announcements. The information is presented here to communicate Lenovo's current investment and development activities as a good faith effort to help with our customers' future planning. All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the full text of the specific Statement of Direction.

This document is for educated service personnel only. If you are not familiar with the described system, we ask you to refrain from trying to apply what is described herein – you could void the preloaded system installation and void the SAP certified configuration.

1.5 Support

The System x SAP HANA development team provides new images for the SAP HANA appliance at regular intervals. These images have dependencies regarding the hardware, operating systems, and hardware drivers, and the use of the latest image for maintenance and installation of the SAP HANA appliance is highly recommended. The full guidebook can be downloaded from the SAP Solutions at Lenovo Community (SAP HANA Support Document section). Please contact [email protected] to get enrolled for education prior to installing a Lenovo Solution appliance.

In case of issues with the SAP HANA appliance, the customer is asked to open an SAP Help Desk request (OSS ticket) first and foremost. If the issue is identified as a hardware or file system issue, the ticket will be forwarded to the Lenovo support team and handled appropriately. Although this may be contrary to standard Lenovo Support processes, it is the approved and accepted support process for all SAP appliances, including the SAP HANA appliance; only by following this path can we ensure the proper configuration of the Lenovo Solution. If the customer instead opens a Lenovo support ticket for the system directly, he might be requested to perform system upgrades to firmware or software to the latest available levels, which might not be supported with the SAP HANA appliance and can void the warranty and support of said machine.

A list of the minimally required versions can be found in SAP Note 1880960 – Lenovo Systems Solution for SAP HANA Platform Edition FW/OS/Driver Maintenance. Whenever firmware level recommendations (fixing known firmware issues) for the Lenovo components of the SAP HANA appliance are given by the individual System x support representatives, it is the customers' responsibility to upgrade (or downgrade) to the recommended levels as instructed by those representatives. Whenever other hardware or software recommendations (that fix known issues) for components of the SAP HANA appliance are given by the individual Lenovo support representatives, it is the customers' responsibility to upgrade (or downgrade) to the recommended levels as instructed by Lenovo support representatives. Whenever operating system recommendations (fixing known operating system issues) for the SUSE Linux components of the SAP HANA appliance are given by SAP, SUSE, or IBM/Lenovo support representatives, it is the customers' responsibility to upgrade (or downgrade) to the recommended levels as instructed by SAP through an explicit SAP Note or a Customer OSS Message. Note that if the Linux kernel is updated, the IBM GPFS software has to be recompiled against the new kernel as well.
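As an illustration of the last point, the following sketch shows how the GPFS portability layer is typically rebuilt after a kernel update. The exact steps depend on the installed GPFS release; treat the paths and commands below as an outline to be checked against the GPFS documentation for your version, not as the official Lenovo procedure.

# On GPFS/Spectrum Scale 4.1 and later, a helper script builds and
# installs the portability layer for the running kernel:
/usr/lpp/mmfs/bin/mmbuildgpl

# On GPFS 3.5, the portability layer is built from the shipped sources:
cd /usr/lpp/mmfs/src
make Autoconfig        # generate a configuration for the running kernel
make World             # compile the kernel modules
make InstallImages     # install the modules for the new kernel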
To check for updates, go to the following websites. If software and documentation updates are available, you can download them from the respective Lenovo, IBM, SUSE, Red Hat, or SAP website and follow the procedure in the included documentation to update the software.

• Firmware and drivers for System X6 servers – You can obtain updates for the System x3850/x3950 X6 servers on the IBM support website (Fix Central) at http://www.ibm.com/support/fixcentral using the 'Find product' tab.
• IBM General Parallel File System (IBM GPFS) and IBM Spectrum Scale updates – You can obtain updates for GPFS 3.5 and IBM Spectrum Scale/GPFS 4.1 on the IBM support website (Fix Central).
• SUSE Linux Enterprise Server for SAP Applications 11 SP3 – You can download the installation package from the SUSE website at http://download.novell.com/Download?buildid=XL0RqEykZpc~
• SUSE Linux patches and updates – You can obtain the latest code updates for SUSE from the SUSE website at http://download.novell.com/patch/finder/
• Red Hat Enterprise Linux 6.5 and 6.6 – You can download the installation package from the Red Hat website at http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
• VMware ESX Server patches and updates – You can obtain the latest code updates for vSphere ESX server from the VMware website at http://www.vmware.com/support/
• SAP HANA appliance updates – You can obtain the latest code updates from SAP at the SAP Service Marketplace at http://service.sap.com/swdc

Lenovo recommends that customers follow the software upgrade recommendations set out by SAP in the SAP HANA Technical Operations Manual (TOM, http://help.sap.com/hana/SAP_HANA_Technical_Operations_Manual_en.pdf). In parallel, SAP describes their operational concept, including the updating of operating system components, in SAP Note 1599888 – SAP HANA: Operational Concept.

It is important to understand that the corrections listed in SAP Note 1880960 are those known to be a solution to a definite problem when running the SAP HANA appliance on the System x solutions. Nevertheless, the organizations owning the individual products provide many more fixes that are unknown to the Lenovo-SAP team, yet are recommended to be applied. It is expected that you contact your IBM/Lenovo service contact to get a list of those fixes, as well as a reasonably current service level in general.

2 Introduction

2.1 Purpose

This document is intended to provide a single point of reference for techniques and product behaviors when dealing with SAP HANA.

2.2 Applicability

The techniques and product behaviors outlined in this document apply to:
• SAP HANA appliance Platform Edition v1.0
• SLES for SAP (SUSE Linux Enterprise Server for SAP Applications) 11 SP3
• RHEL (Red Hat Enterprise Linux) 6.5 and 6.6
• IBM GPFS 3.5 and 4.1
• the Lenovo Systems solution for SAP HANA appliance based on the System x3850/x3950 X6 Workload Optimized Server

2.2.1 SAP HANA Platform Edition Versions

In this document we refer to several different versions of the Lenovo Solution guided installation software. The following numbering refers to the corresponding SAP HANA Platform Edition version:
• 1.7.x – SAP HANA Platform Edition v 1.0 SPS07
• 1.8.x – SAP HANA Platform Edition v 1.0 SPS08
• 1.9.x – SAP HANA Platform Edition v 1.0 SPS09

2.3 Exclusions and Exceptions

The techniques and product behaviors outlined in this document may not be applicable to future releases.

2.4 Conventions

This guide uses several conventions to improve the reader's experience and the ease of understanding.

2.4.1 Icons Used

The following information boxes indicate important information you should follow according to the level of importance:

Attention
ATTENTION – pay close attention to the instructions given
Warning
WARNING – this is something to take into consideration

Note
INFORMATION – extra information describing something in detail

2.4.2 Code Snippets

When reading code snippets, note the following: lines of code that are too long to be shown in one line will be broken automatically. This line break is indicated by an arrow at the end of the first line and an arrow at the start of the second line:

1 This is a code snippet that is too long to be printed in one single line, therefore ←
 →you will see an automatic line break.

There are also line numbers at the left side of each code snippet to improve readability. Code examples that contain commands to be executed on a command line follow these rules:
• Lines beginning with a # indicate commands to be executed by the root user.
• Lines beginning with a $ indicate commands to be executed by an arbitrary user.
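As a minimal illustration of these conventions (the two commands are arbitrary examples, not installation steps):

1 # uname -r
2 $ echo $HOME

The first command would be executed by the root user, the second by any user.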
3 Solution Overview

This document provides general information specific to the Lenovo Solution. It assumes that the reader understands the basic structure and components of the SAP HANA Platform Edition.

3.1 The SAP HANA Appliance Software

The Lenovo Solution is based on building blocks that provide a highly scalable infrastructure for SAP HANA based on the System x architecture – the x3850/x3950 X6 – as well as software, such as IBM GPFS, that will be used to run SAP HANA. SAP HANA should be installed only on hardware that has been specifically certified for SAP HANA by SAP. This hardware may not be configured from individual parts; rather, it is to be ordered and delivered as a single unit using a Lenovo manufacturer type/model number specified later. Lenovo has created several system models upon which you may install and run SAP HANA according to the sizing charts coordinated with SAP. For each workload type, a special System x type/model has been approved by SAP and Lenovo to accommodate the requirements of the SAP HANA Platform Edition.

3.2 Definition of SAP HANA

Figure 1 defines the current SAP HANA scenarios that can be leveraged through the System x solution for the SAP HANA Platform Edition: corporate business intelligence with SAP Business Warehouse, local BI, data marts fed from SAP ERP systems (CRM, SRM, SCM), and customer applications, each running on an SAP HANA DB appliance.

[Figure 1: Current SAP HANA Appliance Scenarios]

4 Hardware Configurations

The System X6 Workload Optimized servers for SAP HANA are based upon two building blocks that can be used to fulfill the hardware requirements for SAP HANA: the System x3850 X6 (Figure 2a) and the System x3950 X6 (Figure 2c), optionally extended with the System Storage EXP2524 (Figure 2b). The SAP HANA appliance software must be installed only on a certified and tested hardware configuration based on one of these two models. A customer needs only to choose the model and the extra options to fulfill their requirements. Models created manually will be supported neither by Lenovo nor by SAP, due to the high-performance criteria set out by SAP during certification.

[Figure 2: Hardware Overview – (a) System x3850 X6, (b) System Storage EXP2524, (c) System x3950 X6]

System x3850 X6 Workload Optimized Server for SAP HANA (Figure 2a)
• 2×–4× Intel Xeon E7-8880v2 or E7-8880v3 Family of Processors. For improved performance, E7-8890v2 is supported as an optional feature; for customers who confirm that an upgrade to an 8-socket system will never be desired, E7-4880v2 or E7-4890v2 are also supported as optional alternate features. On the v3 side, E7-8890v3 (for improved performance) and E7-8880Lv3 (for improved efficiency) are supported as optional features.
• 128–2048GB DDR3 memory
• Internal storage: 6×1.2TB 2.5" HDD for RAID1 and RAID5, plus 2×400GB SSD for LSI CacheCade
• One (1) external storage unit (EXP2524) for systems > 512GB (stand-alone configurations) or ≥ 512GB (cluster configurations)
• 2× dual-port 10GbE NICs
• 1× quad-port 1GigE NIC
• IBM General Parallel File System
• Certified for SLES for SAP OS and SAP HANA appliance software

Optional System Storage EXP2524 (Figure 2b)
• Up to 20×1.2TB 2.5" HDD, RAID5 (RAID6 optional)
• Up to 4×400GB SSD for LSI CacheCade

System x3950 X6 Workload Optimized Server for SAP HANA (Figure 2c)
• 4×–8× Intel Xeon E7-8880v2/v3 and the other processor options listed for the System x3850 X6
• 512GB–6TB DDR3 memory
• Internal storage: 12×1.2TB 2.5" HDD for RAID1 and RAID5, plus 4×400GB SSD for LSI CacheCade
• One (1) external storage unit (EXP2524) for systems ≥ 3TB (stand-alone configurations) or > 1024GB (cluster configurations)
• 2× dual-port 10GbE NICs
• 1× quad-port 1GigE NIC
• IBM General Parallel File System
• Certified for SLES for SAP OS and SAP HANA appliance software

4.1 SAP HANA Platform Edition T-Shirt Sizes

Lenovo and SAP have certified a set of configurations to be used with the SAP HANA Platform Edition that are based on the Intel Xeon IvyBridge EX E7-4880v2, E7-4890v2, E7-8880v2, E7-8890v2 and Intel Xeon Haswell EX E7-8880v3, E7-8880Lv3, E7-8890v3 processor families. Lenovo provides a model/type number for four (4) socket and eight (8) socket systems that are set up for each model certified by SAP.
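Since support depends on the certified combination of processors and memory, it can be useful to cross-check a delivered node against the ordered model before installation. A simple, non-authoritative way to do this from any Linux shell is to read the socket count, CPU model and installed memory:

1 $ lscpu | grep -E 'Socket|Model name'
2 $ grep MemTotal /proc/meminfo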
4.2 Single Node versus Clustered Configuration

The Systems X6 Solution servers can be configured in two ways:

1. As a single node configuration with separate, independent HANA installations (for example: production, test, development). These are installed as single servers; each server has its own individual GPFS cluster, independent from the others.

[Figure 3: SAP HANA Multiple Single Node Example – three independent servers (production, test, development), each running its own SAP HANA database and GPFS on internal storage]

2. As a clustered configuration with a distributed HANA instance across servers. These are installed as clustered servers: all servers (nodes) form one HANA cluster and one GPFS cluster. Clustered servers need to be configured differently from a single node system and are therefore described here explicitly.

[Figure 4: SAP HANA Clustered Example with Backup – SAP BW/ERP clients accessing a HANA cluster of master, worker and standby nodes; GPFS primary, secondary and additional nodes on internal storage, with optional SAN storage for backup/recovery]

The terms scale-out and cluster are used interchangeably in this document. What is meant is the use of multiple single Lenovo workload optimized servers, connected via one or more configuration-specific network switches, in such a way that all servers act as one single high-performance SAP HANA instance. Further documentation will differentiate between non-clustered (single or consolidated) and clustered installations.

4.2.1 Network Switch Options

For clustered configurations, extra hardware such as network switches and adapters needs to be purchased in addition to the clustered appliances. Currently, the supported network switches for the Lenovo Workload Optimized server in a clustered configuration are:

Network        Description                        Part Number
10Gb Ethernet  RackSwitch G8296 (Rear-to-Front)   7159GR6
10Gb Ethernet  RackSwitch G8296 (Front-to-Rear)   7159GF5
10Gb Ethernet  RackSwitch G8272 (Rear-to-Front)   7159CRW
10Gb Ethernet  RackSwitch G8272 (Front-to-Rear)   7159CFV
10Gb Ethernet  RackSwitch G8264 (Rear-to-Front)   7159G64
10Gb Ethernet  RackSwitch G8264 (Front-to-Rear)   715964F
10Gb Ethernet  RackSwitch G8124E (Rear-to-Front)  7159BR6
10Gb Ethernet  RackSwitch G8124E (Front-to-Rear)  7159BF7
1Gb Ethernet   RackSwitch G8052 (Rear-to-Front)   7159G52
1Gb Ethernet   RackSwitch G8052 (Front-to-Rear)   715952F

Table 1: Network Switch Options

Note
These configurations may change over time, so please contact [email protected] for any update.

4.3 SAP HANA Optimized Hardware Configurations

SEO models exist for certain configurations; please see Appendix E: Lenovo X6 Server MTM List & Model Overview on page 214 for more details.

4.3.1 System x3850 X6 Single Node Configurations

SAP model (GB):   128, 256, 384, 512 (two socket) and 256, 512 (four socket)
Type/Model:       x3850 X6, 6241-AC3
CPU:              2× or 4× Intel Xeon E7-8880v2/v3
Memory:           128GB–512GB, matching the SAP model
Disk controller:  1× M5210 (6-HDD layout) or 2× M5210 (12-HDD layout)
Disk layout:      6×1.2TB HDD + 2×400GB SSD (3.6 TB RAID5 for SAP HANA data/log) or 12×1.2TB HDD + 4×400GB SSD (9.6 TB RAID5 for SAP HANA data/log)
Network:          2× dual-port 10GbE, 1× quad-port 1GigE

Table 2: System x3850 X6 Single Node Configurations

4.3.2 System x3950 X6 Single Node Configurations

SAP model (GB):   256, 512, 768, 1024, 1536, 2048
Type/Model:       x3950 X6, 6241-AC4
CPU:              4× Intel Xeon E7-8880v2/v3
Memory:           256GB–2048GB, matching the SAP model
Disk controller:  1× M5210 or 2× M5210
Disk layout:      6×1.2TB HDD + 2×400GB SSD (3.6 TB RAID5) or 12×1.2TB HDD + 4×400GB SSD (9.6 TB RAID5) for SAP HANA data/log
Network:          2× dual-port 10GbE, 1× quad-port 1GigE

Table 3: System x3950 X6 Single Node Four Socket Configurations

SAP model (GB):   512, 1024, 1536, 2048
Type/Model:       x3950 X6, 6241-AC4
CPU:              8× Intel Xeon E7-8880v2/v3
Memory:           512GB, 1TB, 1.5TB, 2TB
Disk controller:  1× M5210 or 2× M5210
Disk layout:      6×1.2TB HDD + 2×400GB SSD (3.6 TB RAID5) or 12×1.2TB HDD + 4×400GB SSD (9.6 TB RAID5) for SAP HANA data/log
Network:          2× dual-port 10GbE, 1× quad-port 1GigE

Table 4: System x3950 X6 Single Node Eight Socket Configurations
4.3.3 System x3850 X6 Single Node Four Socket Configurations with Storage Expansion

SAP model (GB):   768, 1024, 1536*, 2048*
Type/Model:       x3850 X6, 6241-AC3
CPU:              4× Intel Xeon E7-8880v2/v3
Memory:           768GB, 1TB, 1.5TB, 2TB
Disk controller:  1× M5210 & 1× M5120/M5225, or 2× M5210 & 1× M5120/M5225 for the largest models
Disk layout:      15×1.2TB HDD & 4×400GB SSD up to 15×1.2TB HDD & 8×400GB SSD; 13.2 TB or 19.2 TB RAID5 for SAP HANA data/log
Network:          2× dual-port 10GbE, 1× quad-port 1GigE

Table 5: System x3850 X6 Single Node Four Socket Configurations with Storage Expansion
* For Suite on HANA only, not Datamart and BW

4.3.4 System x3950 X6 SAP ERP on SAP HANA Single Node Configurations

SAP model:        3TB, 4TB, 6TB
Type/Model:       x3950 X6, 6241-AC4
CPU:              8× Intel Xeon E7-8880v2/v3
Memory:           3TB, 4TB, 6TB
Disk controller:  2× M5210 & 1× M5120/M5225
Disk layout:      21×1.2TB HDD & 6×400GB SSD up to 30×1.2TB HDD & 8×400GB SSD; 19.2 TB up to 28.8 TB RAID5 for SAP HANA data/log
Network:          2× dual-port 10GbE, 1× quad-port 1GigE

Table 6: System x3950 X6 SAP ERP on SAP HANA Single Node Configurations

4.3.5 System x3850 X6 Cluster Node Configurations with Storage Expansion

SAP model (GB):   256, 512, 1024
Type/Model:       x3850 X6, 6241-AC3
CPU:              2× Intel Xeon E7-8880v2 or 4× Intel Xeon E7-8880v2/v3
Memory:           256GB, 512GB, 1TB
Disk controller:  1× M5210 & 1× M5120/M5225
Disk layout:      15×1.2TB HDD & 4×400GB SSD; 13.2 TB RAID5 for SAP HANA data/log
Network:          2× dual-port 10GbE, 1× quad-port 1GigE

Table 7: System x3850 X6 Cluster Node Configurations with Storage Expansion
4.3.6 System x3950 X6 Cluster Node Configurations

SAP model (GB):   512, 1024 (four socket) and 1024, 2048 (eight socket)
Type/Model:       x3950 X6, 6241-AC4
CPU:              4× or 8× Intel Xeon E7-8880v2/v3
Memory:           512GB, 1TB, 1TB, 2TB
Disk controller:  2× M5210, plus 1× M5120/M5225 for configurations with storage expansion
Disk layout:      12×1.2TB HDD & 4×400GB SSD (9.6 TB RAID5) or 21×1.2TB HDD & 6×400GB SSD (19.2 TB RAID5) for SAP HANA data/log
Network:          2× dual-port 10GbE, 1× quad-port 1GigE

Table 8: System x3950 X6 Cluster Node Configurations

4.4 Card Placement

Attention
You need to make sure that the cards are placed in the correct PCI slots; only with the correct card layout is your machine supported by Lenovo. This step must be done before the installation.

4.4.1 Network Interface Cards

The x3850 X6 machine comes with two Mellanox ConnectX-3 10GbE adapters that provide two 10GbE ports each, or two Mellanox ConnectX-3 FDR IB VPI adapters that provide two QSFP ports each. With QSA adapters the QSFP ports support SFP+ transceivers for 10GbE connectivity. A quad-port Intel I-350 provides four 1GbE ports and is placed in slot 10. If more 1GbE ports are needed, a quad-port Intel I-340 PCI card is available optionally; in a x3950 X6, an additional I-350 card can be placed in slot 42. Depending on whether the machine has two, four or eight sockets, there is a different card placement: please refer to figure 5 and table 10 for two socket machines, figure 6 and table 11 for four socket machines, and figure 8 and table 12 for eight socket machines. Concerning the numbering of the slots, please note that PCI slots 11 and 12 are located in the Storage Book, which is accessible from the front; a x3950 X6 machine has an additional Storage Book containing PCI slots 43 and 44 (see figure 7).

4.4.2 Slots for additional Network Interface Cards

If the customer needs more network ports, the PCI slots shown in table 9 may be used for additional NICs:

Machine                  PCI Slots
x3850 X6, two sockets    9, 10
x3850 X6, four sockets   2, 3, 5, 6, 10
x3950 X6, four sockets   9, 10, 41, 42
x3950 X6, eight sockets  5, 6, 10, 37, 38, 42

Table 9: Slots which may be used for additional NICs

4.4.3 RAID Adapter Cards

The internal RAID adapter is a ServeRAID M5210, which resides in slot 12 in the Storage Book. In the x3950 X6, two internal RAID adapters are used, residing in slots 12 and 44. The first external RAID adapter (ServeRAID M5120 or M5225) in a x3850 X6 is placed in slot 8, the second in slot 7, and the third in slot 9. In a x3950 X6 machine, placement starts in slot 40, then 39, then 41, and finally slots 7 and 8; refer to table 13 for details.

Card                                       Slot  Port Labels  Ethernet Devices
ServeRAID M5210 (internal)                 12    –            –
Intel I-350 1GbE quad port                 10    E F G H      eth4 eth5 eth6 eth7
Intel I-340 1GbE quad port *               9     –            eth8 eth9 eth10 eth11
Mellanox ConnectX-3 (10GbE or FDR IB VPI)  8     A B          eth0 eth1
Mellanox ConnectX-3 (10GbE or FDR IB VPI)  7     C D          eth2 eth3
100MbE internal Ethernet adapter           –     I            – (system management via the IMM)

Table 10: Card assignments for a two socket x3850 X6
* This card is optional
[Figure 5: Workload Optimized System x3850 X6 2 Socket Rear View]

Card                                       Slot  Port Labels  Ethernet Devices
ServeRAID M5210 (internal)                 12    –            –
Intel I-350 1GbE quad port                 10    E F G H      eth4 eth5 eth6 eth7
ServeRAID M5120/M5225 (external) *         9     –            –
ServeRAID M5120/M5225 (external) *         8     –            –
ServeRAID M5120/M5225 (external) *         7     –            –
Intel I-340 1GbE quad port *               5     –            eth8 eth9 eth10 eth11
Mellanox ConnectX-3 (10GbE or FDR IB VPI)  4     C D          eth2 eth3
Mellanox ConnectX-3 (10GbE or FDR IB VPI)  1     A B          eth0 eth1
100MbE internal Ethernet adapter           –     I            – (system management via the IMM)

Table 11: Card assignments for a four socket x3850 X6
* These cards are only used in certain configurations; please refer to section 4.4.3 for details.

[Figure 6: Workload Optimized System x3850 X6 4 Socket Rear View]

[Figure 7: Workload Optimized System Storage Book – contains slots 11 and 12, plus slots 43 and 44 on a x3950 X6 in the additional Storage Book]

Card                                       Slot  Port Labels  Ethernet Devices
Intel I-350 1GbE quad port                 10    E F G H      eth4 eth5 eth6 eth7
Mellanox ConnectX-3 (10GbE or FDR IB VPI)  36    C D          eth2 eth3
Mellanox ConnectX-3 (10GbE or FDR IB VPI)  4     A B          eth0 eth1
Intel I-340 1GbE quad port *               5     K L M N      e.g. eth8 eth9 eth10 eth11
Intel I-350 1GbE quad port *               42    K L M N      e.g. eth8 eth9 eth10 eth11
100MbE internal Ethernet adapters          –     I, J         – (system management via the IMM)

Table 12: Network interface card assignments for an eight socket x3950 X6
* These cards are optional; please refer to table 13 for details.

Table 13 (card placement for x3950 X6 four socket and eight socket) assigns, per configuration (four processors: 512GB 4S and 1TB 4S; eight processors: 1TB, 2TB, 4TB, 6TB* and 12TB*), the Mellanox ConnectX-3 (MLNX), Intel I-350, ServeRAID M5210 and ServeRAID M5120/M5225 cards to slots 4, 7, 8, 10, 12, 36, 39, 40, 41, 42 and 44. In the original table, M5120/M5225 entries are qualified with S/C or C depending on the configuration.

[Figure 8: Workload Optimized System x3950 X6 8 Socket Rear View]
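To cross-check the card placement and the resulting Ethernet device numbering against the tables above, the installed adapters and their kernel device names can be listed from the operating system. This is a convenience check only; the authoritative reference remains the physical slot layout:

# lspci | grep -i ethernet
# ip -o link show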
5 Networking

5.1 Networking Requirements

SAP currently recommends that individual workloads are separated by either physical or virtual LAN addresses or subnets. The individual workloads described by SAP are:
• SAP HANA internal communication via SAP HANA private networking
• Customer access to the SAP HANA appliance via:
  – SAP Landscape Transformation Replication (LT)
  – Sybase Replication (SR)
  – SAP Business Objects Data Services (DS)
  – Business Objects XI, Microsoft Excel, etc.
• SAP HANA client access
• Server data management
• Server application management

In addition to the SAP workloads, the Lenovo Solution defines two further workloads:
• IBM clustered file system communication for GPFS
• Physical server management via the Integrated Management Module (IMM):
  – hardware support, VNC access, console web access and SSH access
  – server data management tools for system/DB backup and restore operations
  – logical server application management (can be partially accomplished via the IMM): SSH access, SAP Support access

We strongly recommend that the SAP workloads above are given dedicated and distinct subnets using separate Ethernet adapters (NICs). It is necessary to separate the IBM GPFS and SAP HANA internal networks from all other networks as well as from each other; if they are not separated, SAP HANA performance may be compromised and the system is supported neither by SAP nor by Lenovo. In addition, external networks, e.g. for SAP Client/BW and SAP management communication, should be separated as well. Servers configured in a clustered scenario require two dedicated high-speed NICs (e.g. 10GbE) with separate physical private LANs for the internal communication of GPFS and SAP HANA. The servers, the Integrated Management Module (IMM) and the corresponding switches should be set up and integrated into the customer network environment according to the customer's requirements and the recommendations from SAP.

5.2 Jumbo Frames

It is possible and allowed to activate so-called jumbo frames for the HANA and GPFS networks. Jumbo frames are Ethernet frames with a Maximum Transmission Unit (MTU) of up to 9000 bytes; the standard MTU is 1500 bytes. The advantage of jumbo frames is less overhead for headers and checksum computation, which can lead to better network performance on the HANA and GPFS networks.

Attention
Jumbo frames can only be used if all network components that have to process them (for example networking adapters and switches) support jumbo frames. The switches G8264, G8272, G8296 and G8124E are certified for use in the Lenovo Solution appliance with jumbo frames.

In a standard cluster setup jumbo frames can be activated. In DR (Disaster Recovery, previously SAP Disaster Tolerance) or High Availability setups, however, the HANA and GPFS networks may communicate via non-Lenovo customer switches that cannot handle jumbo frames; in such scenarios it is recommended not to use jumbo frames, otherwise the network setup becomes more complicated.

Warning
Jumbo frames are activated during the installation phase for bond0 and bond1. If erroneously activated, jumbo frames cause the loss of network connectivity.

To change this behaviour, you have to change the MTU size:
• SUSE: in the YaST module for networking, on the General tab in the configuration of the network device/bond
• Red Hat: by changing the MTU size in the file /etc/sysconfig/network-scripts/ifcfg-* of the interface/bond
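For example, on Red Hat the MTU of the bonded HANA device could be checked and adjusted in its ifcfg file as sketched below; the device name bond1 is illustrative, and on SUSE the corresponding files live under /etc/sysconfig/network/:

# grep MTU= /etc/sysconfig/network-scripts/ifcfg-bond1
MTU=9000
# ifdown bond1 && ifup bond1
# ip link show bond1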
5.3 Network Configuration

Before you configure the server and install the Lenovo Solution, please gather the following network information from your network administrator (one value per entry in Table 14). Please use only IPv4 addresses:

IP Address / Default Network Prefix / Default Netmask / Default Gateway / Primary DNS IP / Secondary DNS IP / Domain Search / NTP Server

Table 14: Customer infrastructure addresses

Note
In case the customer plans to install a single node configuration but would like to scale it out to a cluster later by adding more servers: plan the network configuration for the GPFS and HANA networks as if the cluster already existed, to simplify a later scale-out.

Network (per node)               Port Label                               IP Address / Hostname                                        Netmask / Gateway
IBM GPFS private (predefined)    any (single); A/C (cluster, mandatory)   127.0.1.1 (default, single) or e.g. 192.168.10.101 / gpfsnode01   255.255.255.0 (recommended) / none (recommended)
SAP HANA private (predefined)    any (single); B/D (cluster, mandatory)   127.0.2.1 (default, single) or e.g. 192.168.20.101 / hananode01   255.255.255.0 (recommended) / none (recommended)
Customer network                 any of the remaining NIC ports           customer-provided                                            customer-provided
IMM                              sys-mgmt                                 customer-provided                                            customer-provided

For server node 02 and all further nodes the scheme continues accordingly (gpfsnode02 / 192.168.10.102, hananode02 / 192.168.20.102, and so on).

Table 15: IP address configuration

5.4 Network Switch Configuration For Clustered Installations

In a clustered configuration with high availability, the internal networks of the appliance for GPFS and HANA are set up with redundant links. These connect to redundant G8264, G8272, G8296 or G8124E 10GigE switches. Both switches are connected with a minimum of two ISL ports; it is recommended to use the 40GbE ports for the ISLs. On the host side, the two corresponding ports of each network are configured as Linux bond devices. If a network adapter or one of the switches fails, the SAP HANA network and the GPFS network are taken over by the remaining switch and network adapter. Details of the exact configuration can be found in section 5.7: Network Configurations in a Clustered Environment.

It is recommended to establish redundant network connections for the other networks (e.g. the client network) as well. The data replication connection to the primary data source can also be set up in a redundant fashion and connects directly to the appliance-internal 10GigE HANA network. The details of this setup depend strongly on the customer's network infrastructure and need to be planned accordingly; it is similar to the internal networks and requires two identical 1GigE or 10GigE switches.

Warning
When connecting the data replication network directly to the internal 10GigE network, an ACL needs to be configured on the uplink port to isolate the internal networks (e.g. 192.168.20.0/24) from the customer network.
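Once the bonds are up, the state of a bond and its slave interfaces can be inspected through the standard Linux bonding driver, e.g. for the GPFS bond (bond0):

# cat /proc/net/bonding/bond0
# ip link show bond0

A failed adapter or switch typically shows up there as a slave interface in state "down", while the bond itself stays operational over the remaining link.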
As long as there is one redundant path to each server, the remaining appliance and data management networks can be implemented with a single link; each of these networks then connects to one of the two switches. To implement network redundancy on the switch level, a Virtual Link Aggregation Group (VLAG) needs to be created on the two network switches. A VLAG requires a dedicated inter-switch link (ISL) for synchronization.

Note
For more details on VLAGs please obtain the Application Guide respective to the RackSwitch model and N/OS you have installed and consult the chapter "Virtual Link Aggregation Groups" (e.g. "RackSwitch G8272 Application Guide").

5.5 Customer Site Networks

We allow the customer to define and use their own networks and connect them to the dedicated customer network NICs using their own switch infrastructure. This guide does not go into detail regarding the customer's switch configuration, nor its configuration in the cluster. Please ensure the proper IP address setup on the Lenovo Solution servers. More details can be found in section 5.7: Network Configurations in a Clustered Environment.

5.6 Network Definitions

5.6.1 Numbering conventions

Network     VLAN       IP-Interface  LACP-Key     VLAG-Key   Subnet
ISL         4094       –             VLAN+1000    LACP-Key   – (Tier-ID 10)
GPFS        100(++)    10            port#+1000   LACP-Key   192.168.10.0/24
HANA        200(++)    20            port#+1000   LACP-Key   192.168.20.0/24
IMM (BMC)   300(++)    30            port#+1000   LACP-Key   192.168.30.0/24
MGMT        4095* (G8264/G8272/G8296/G8124) or 4092** (G8052), IP interface 128

* VLAN 4095 is internally assigned to the management port(s) and cannot be changed.
** VLAN 4092 is a suggestion for the management VLAN.

Table 16: Numbering conventions

Note
The "(++)" in the table above indicates that +1 should be added for every new network in case of multiple GPFS, HANA or IMM LANs.

5.6.2 Internal Networks – Option 1: G8264 RackSwitch 10Gbit

This option uses the G8264 RackSwitch 10Gbit Ethernet switch as the private network landscape for IBM GPFS and SAP HANA. It allows up to 24 Lenovo Solution servers (or 26 servers with a "40G -> 4x 10G" breakout cable on ports 9 or 13) to be connected. The setup is as follows: two G8264 switches are connected via a bonded 40Gb inter-switch link (ISL) on ports 1 and 5; on each switch the odd ports 17–63 carry the GPFS network and the even ports 18–64 carry the HANA network.

[Figure 9: G8264 RackSwitch front view]

This guide defines the IBM GPFS network as 192.168.10.0/24 and the SAP HANA network as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it should then be used consistently as the internal (private) network within this guide.

Switch   Port  VLAN  IP Address           Hostname    Server NIC
g8264-1  MGMT  4095  <customer-mgmt IP1>  <switch1>   n/a
g8264-1  17    100   192.168.10.101       gpfsnode01  bond0
g8264-1  18    200   192.168.20.101       hananode01  bond1
g8264-1  19    100   192.168.10.102       gpfsnode02  bond0
g8264-1  20    200   192.168.20.102       hananode02  bond1
...      ...   ...   ...                  ...         ...
g8264-1  63    100   192.168.10.124       gpfsnode24  bond0
g8264-1  64    200   192.168.20.124       hananode24  bond1
g8264-2  MGMT  4095  <customer-mgmt IP2>  <switch2>   n/a
g8264-2  17–64 as on g8264-1 (same VLANs, IP addresses, hostnames and bonds)

Table 17: G8264 RackSwitch port assignments

Note
There is no public network attached to these switches.
5.6.3 Internal Networks – Option 2: G8124 RackSwitch 10Gbit

This option uses the G8124 RackSwitch 10Gbit Ethernet switch as the private network landscape for IBM GPFS and SAP HANA. It allows up to 7 Lenovo Solution servers to be connected. The setup is as follows: two G8124 switches are connected via a bonded 10Gb inter-switch link on ports 23 and 24; on each switch the odd ports 1–13 carry the GPFS network and the even ports 2–14 carry the HANA network.

[Figure 10: G8124 RackSwitch front view]

This guide defines the IBM GPFS network as 192.168.10.0/24 and the SAP HANA network as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it should then be used consistently as the internal (private) network within this guide.

Switch   Port    VLAN  IP Address           Hostname    Server NIC
g8124-1  MGMT-b  4095  <customer-mgmt IP1>  <switch1>   n/a
g8124-1  1       100   192.168.10.101       gpfsnode01  bond0
g8124-1  2       200   192.168.20.101       hananode01  bond1
g8124-1  3       100   192.168.10.102       gpfsnode02  bond0
g8124-1  4       200   192.168.20.102       hananode02  bond1
g8124-1  5       100   192.168.10.103       gpfsnode03  bond0
g8124-1  6       200   192.168.20.103       hananode03  bond1
...      ...     ...   ...                  ...         ...
g8124-1  13      100   192.168.10.107       gpfsnode07  bond0
g8124-1  14      200   192.168.20.107       hananode07  bond1
g8124-2  MGMT-b  4095  <customer-mgmt IP2>  <switch2>   n/a
g8124-2  1–14 as on g8124-1 (same VLANs, IP addresses, hostnames and bonds)

Table 18: G8124 RackSwitch port assignments

5.6.4 Internal Networks – Option 3: G8272 RackSwitch 10Gbit

This option uses the G8272 RackSwitch 10Gbit Ethernet switch as the private network landscape for IBM GPFS and SAP HANA. It allows up to 24 Lenovo Solution servers (or 32 servers with "40G -> 4x 10G" breakout cables on ports 49, 50, 51 or 52) to be connected. The setup is as follows: two G8272 switches are connected via a bonded 40Gb inter-switch link on ports 53 and 54; on each switch the odd ports 1–47 carry the GPFS network and the even ports 2–48 carry the HANA network.

[Figure 11: G8272 RackSwitch front view]
This guide defines the IBM GPFS network as 192.168.10.0/24 and the SAP HANA network as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it should then be used consistently as the internal (private) network within this guide.

Switch   Port  VLAN  IP Address           Hostname    Server NIC
g8272-1  MGMT  4095  <customer-mgmt IP1>  <switch1>   n/a
g8272-1  1     100   192.168.10.101       gpfsnode01  bond0
g8272-1  2     200   192.168.20.101       hananode01  bond1
g8272-1  3     100   192.168.10.102       gpfsnode02  bond0
g8272-1  4     200   192.168.20.102       hananode02  bond1
...      ...   ...   ...                  ...         ...
g8272-1  47    100   192.168.10.124       gpfsnode24  bond0
g8272-1  48    200   192.168.20.124       hananode24  bond1
g8272-2  MGMT  4095  <customer-mgmt IP2>  <switch2>   n/a
g8272-2  1–48 as on g8272-1 (same VLANs, IP addresses, hostnames and bonds)

Table 19: G8272 RackSwitch port assignments

Note
There is no public network attached to these switches.

5.6.5 Internal Networks – Option 4: G8296 RackSwitch 10Gbit

This option uses the G8296 RackSwitch 10Gbit Ethernet switch as the private network landscape for IBM GPFS and SAP HANA. It allows up to 43 Lenovo Solution servers (or 47 servers with "40G -> 4x 10G" breakout cables on ports 87 and 88) to be connected. The setup is as follows: two G8296 switches are connected via a bonded 40Gb inter-switch link on ports 95 and 96; on each switch the odd ports 1–85 carry the GPFS network and the even ports 2–86 carry the HANA network.

[Figure 12: G8296 RackSwitch front view]

This guide defines the IBM GPFS network as 192.168.10.0/24 and the SAP HANA network as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it should then be used consistently as the internal (private) network within this guide.

Switch   Port  VLAN  IP Address           Hostname    Server NIC
g8296-1  MGMT  4095  <customer-mgmt IP1>  <switch1>   n/a
g8296-1  1     100   192.168.10.101       gpfsnode01  bond0
g8296-1  2     200   192.168.20.101       hananode01  bond1
g8296-1  3     100   192.168.10.102       gpfsnode02  bond0
g8296-1  4     200   192.168.20.102       hananode02  bond1
...      ...   ...   ...                  ...         ...
g8296-1  85    100   192.168.10.143       gpfsnode43  bond0
g8296-1  86    200   192.168.20.143       hananode43  bond1
g8296-2  MGMT  4095  <customer-mgmt IP2>  <switch2>   n/a
g8296-2  1–86 as on g8296-1 (same VLANs, IP addresses, hostnames and bonds)

Table 20: G8296 RackSwitch port assignments

Note
There is no public network attached to these switches.
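Independent of the switch option chosen, each node must resolve the gpfsnodeNN and hananodeNN names to the private addresses from the tables above. If name resolution is done via /etc/hosts, as is common for appliance-internal networks, the entries would follow this pattern (addresses taken from the example ranges of this guide):

192.168.10.101  gpfsnode01
192.168.20.101  hananode01
192.168.10.102  gpfsnode02
192.168.20.102  hananode02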
.7: Setting up the Switches on page 31 for the RackSwitch setup..168..7.6. see figure 14. .125 192.14.12.10.148 cust-imm48. . .0/24.50_____/ / 51| G8052 Switch |49__________/ ‘----------------------’ 1.30.9.----------------------.3. Therefore it is important that you ensure that the network (switches. sys-mgmt n/a sys-mgmt sys-mgmt .126 .168... etc. wires.site.8. Technical Documentation Legend SAP client 1GbE 10GbE SAP HANA GPFS 10 GbE 10 GbE Interface Inter Switch Links IMM 1 GbE Bonded Interface Optional Interface 40 GbE Customer Customer Interface Zone Interface Zone 0 6 8 1 GigE 10 GigE SAP SAPHANA HANAAppliance Appliance IMM 1GigE 1 Customer Switch Choice Optional IMM 1 0 Node1 IMM 1 0 Node2 IMM 1 0 Node3 1 0 NodeN 10 GbE HANA 6 HANA 8 6 8 6 HANA 8 HANA 10GigE 1 6 8 System management SAP Business Suite 1GigE 2 Customer Switch Choice GPFS 7 GPFS 9 7 9 7 GPFS 10GigE 2 GPFS 9 7 9 Optional 3 2 5 4 10 3 11 2 5 4 10 3 11 2 5 10 4 3 11 2 5 4 10 11 Figure 14: Cluster Node Network Diagram 5.1.7 5. To change the SSH/SCP settings. 2015 31 .1 Configuring SSH/SCP Features on the Switch SSH and SCP features are disabled by default.7. use the following procedure.9.7.1 Setting up the Switches Basic Switch Configuration Setup 5.96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. Connect to the switch via a serial console and execute the following commands: RS RS RS RS 8XXX> enable 8XXX# configure terminal 8XXX(config)# ssh enable 8XXX(config)# ssh scp-enable RS RS RS RS 8XXX(config)# interface ip 128 8XXX(config-ip-if)# ip address <customer-mgmt IP> <customer-subnetmask> 8XXX(config-ip-if)# enable 8XXX(config-ip-if)# exit Example: Configuring gateway RS 8XXX(config)# ip gateway 4 address <customer-gateway> RS 8XXX(config)# ip gateway 4 enable Save changes to switch FLASH memory RS 8XXX# copy running-config startup-config X6 Implementation Guide 1. 255. modification of information.7.1.168. SNMPv3 allows clients to query the MIBs securely. Authentication used is MD5 • User name is adminsha (password adminsha).9. # snmpwalk -v 3 -c Public -u adminmd5 -a md5 -A adminmd5 -x des -X adminmd5 -l authPriv <hostname> sysDescr.253/24 VLAN 4095 MGMT 2 IP: 192. 2015 32 .2 Advanced Setup of the Switches For every switch in the cluster do the following: It is mandatory to setup Virtual Link Aggregation Group (VLAG) between the switches as well as a Virtual Local Area Network (VLAN) for each private network. 5) mgt 5-8 13-16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64 1-4 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 9-12 Port: 29 Port: 17 mgt 5-8 13-16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 1-4 9-12 Port: 30 Port: 18 Port: 18 Port: 30 Port: 17 bond0 bond1 eth6 eth7 eth8 eth9 eth0 eth2 eth1 eth3 Port: 29 node 1 eth4 eth5 sys GPFS bond0 bond1 eth6 eth7 eth8 eth9 HANA eth0 eth2 eth1 eth3 node 2 eth4 eth5 sys Figure 15: Cluster Switch Networking Example Note Please make sure that you pick the same port of each of the two Mellanox adapters for each of the internal networks. approved by the Internet Engineering Steering Group in March. message stream modification and disclosure.2 Simple Network Management Protocol Version 3 SNMP version 3 (SNMPv3) is an enhanced version of the Simple Network Management Protocol. The following illustration shows the setup for an M-sized cluster using the G8264 RackSwitches. 
5.7.1.2 Simple Network Management Protocol Version 3

SNMP version 3 (SNMPv3) is an enhanced version of the Simple Network Management Protocol, approved by the Internet Engineering Steering Group in March 2002. SNMPv3 contains additional security and authentication features that provide data origin authentication, data integrity checks, timeliness indicators and encryption to protect against threats such as masquerade, modification of information, message stream modification and disclosure. SNMPv3 allows clients to query the MIBs securely. SNMPv3 configuration is managed using the following command path menu:

RS 8XXX(config)# snmp-server ?

The default configuration of N/OS has two SNMPv3 users. Both of the following users have access to all the MIBs supported by the switch:
• User name adminmd5 (password adminmd5), authentication MD5
• User name adminsha (password adminsha), authentication SHA

You can try to connect to the switch using the following command:

# snmpwalk -v 3 -c Public -u adminmd5 -a md5 -A adminmd5 -x des -X adminmd5 -l authPriv <hostname> sysDescr.0

Note
The management IP addresses used here are examples and need to be customized according to the customer's network. These instructions are for RackSwitch N/OS Version 8; newer versions may have different commands. Please check the RackSwitch Industry-Standard CLI Reference for the version of the CLI that correlates to the switch N/OS version.

5.7.1.3 Disable Spanning Tree Protocol

RS 8XXX (config)# spanning-tree mode disable
RS 8XXX (config)# no spanning-tree stg-auto

Spanning Tree is disabled globally with "spanning-tree mode disable". The setting "no spanning-tree stg-auto" prevents the switch from automatically creating STG groups when defining VLANs.

5.7.1.4 Disable Default IP Address

RS 8XXX (config)# no system default-ip data

5.7.1.5 Enable L4Port Hash

RS 82XX (config)# portchannel thash l4port

5.7.1.6 Disable Routing

RS 8XXX (config)# no ip routing

5.7.1.7 Add Networking

For each subnetwork, you should create the VLANs and trunk/VLAG configurations as described in section 5.7.3.

5.7.2 Advanced Setup of the Switches

For every switch in the cluster do the following. It is mandatory to set up a Virtual Link Aggregation Group (VLAG) between the switches, as well as a Virtual Local Area Network (VLAN) for each private network. Figure 15 illustrates the setup for an M-sized cluster using G8264 RackSwitches: the two switches (management IPs 192.168.255.253/24 and 192.168.255.252/24 on VLAN 4095) are connected by an ISL on VLAN 4094 with Tier-ID 10 (ports 1 and 5), and each node connects one port of each Mellanox adapter pair to each switch (e.g. ports 17/18 and 29/30) for the bonded GPFS (bond0) and HANA (bond1) networks.

[Figure 15: Cluster Switch Networking Example]

Note
Please make sure that you pick the same port of each of the two Mellanox adapters for each of the internal networks. This reduces complexity.
5.7.3 VLAN Configurations

For each subnetwork, create the following VLANs and trunk/VLAG configurations.

5.7.3.1 IBM GPFS Storage Network

• Create the IP interface for the GPFS storage network (define on switches 1 and 2):

RS 8XXX (config)# vlan 100
RS 8XXX (config)# interface ip 10
# next line for the 1st switch:
RS 8XXX (config-ip-if)# ip address 192.168.10.248 255.255.255.0
# next line for the 2nd switch:
RS 8XXX (config-ip-if)# ip address 192.168.10.249 255.255.255.0
RS 8XXX (config-ip-if)# vlan 100
RS 8XXX (config-ip-if)# enable
RS 8XXX (config-ip-if)# exit

• Define the LACP trunk for the GPFS VLAN (define on switches 1 and 2):

# RS 8264 ports 9-63, odd (bottom) ports
# RS 8272 ports 1-47, odd ports
# RS 8296 ports 1-85, odd ports
# RS 8124 ports 1-21, odd ports
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport access vlan 100
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key 1000+<port>
RS 8XXX (config-if)# bpdu-guard
RS 8XXX (config-if)# spanning-tree portfast
RS 8XXX (config-if)# exit

Repeat this for every port that needs to be configured.

5.7.3.2 SAP HANA Network

• Create the IP interface for the HANA network (define on switches 1 and 2):

RS 8XXX (config)# vlan 200
RS 8XXX (config)# interface ip 20
# next line for the 1st switch:
RS 8XXX (config-ip-if)# ip address 192.168.20.248 255.255.255.0
# next line for the 2nd switch:
RS 8XXX (config-ip-if)# ip address 192.168.20.249 255.255.255.0
RS 8XXX (config-ip-if)# vlan 200
RS 8XXX (config-ip-if)# enable
RS 8XXX (config-ip-if)# exit

• Define the LACP trunk for the HANA VLAN (define on switches 1 and 2):

# RS 8264 ports 10-64, even (top) ports
# RS 8272 ports 2-48, even (top) ports
# RS 8296 ports 2-88, even ports
# RS 8124 ports 2-22, even ports
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport access vlan 200
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key 1000+<port>
RS 8XXX (config-if)# bpdu-guard
RS 8XXX (config-if)# spanning-tree portfast
RS 8XXX (config-if)# exit

Repeat this for every port that needs to be configured.

5.7.3.3 Integrated Management Module (IMM) Network

• Create the IP interface for the IMM network (define on switches 1 and 2):

RS 8052 (config)# vlan 300
RS 8052 (config)# interface ip 30
# next line for the 1st switch:
RS 8052 (config-ip-if)# ip address 192.168.30.248 255.255.255.0
# next line for the 2nd switch:
RS 8052 (config-ip-if)# ip address 192.168.30.249 255.255.255.0
RS 8052 (config-ip-if)# vlan 300
RS 8052 (config-ip-if)# enable
RS 8052 (config-ip-if)# exit

• Set the access VLAN for the switch ports (define on switches 1 and 2):

# RS 8052 ports 1-47
RS 8052 (config)# interface port <port>
RS 8052 (config-if)# switchport access vlan 300
RS 8052 (config-if)# bpdu-guard
RS 8052 (config-if)# exit

# RS 8052 port 48 as management port
RS 8052 (config)# interface port 48
RS 8052 (config-if)# description MGMTPort
RS 8052 (config-if)# switchport access vlan 4092
RS 8052 (config-if)# bpdu-guard
RS 8052 (config-if)# exit

5.7.3.4 Enabling VLAG Setup

• Create the trunk (dynamic or static) used as the ISL:

# one of the next five lines is valid according to the switch type
RS 8264 (config)# interface port 1,5
RS 8272 (config)# interface port 53,54
RS 8296 (config)# interface port 95,96
RS 8124 (config)# interface port 23,24
RS 8052 (config)# interface port 49,50
RS 8XXX (config-if)# switchport mode trunk
# next line defines the VLANs needed on the ISL on the HANA/GPFS switches
RS 82XX (config-if)# switchport trunk allowed vlan add 4094,[HANA VLAN(S),GPFS VLAN(S)]
# next line defines the VLANs needed for the ISL on the IMM switches
RS 8052 (config-if)# switchport trunk allowed vlan add 4094,[IMM VLAN(S)]
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key 5094
RS 8XXX (config-if)# enable
RS 8XXX (config-if)# exit
RS 8XXX (config)# vlag enable

• Define the VLAG peer relationship for each VLAN:

# Define Switch 1
RS 8XXX (config)# vlag tier-id 10
RS 8XXX (config)# vlag hlthchk peer-ip <customer-mgmt IP2>
RS 8XXX (config)# vlag isl adminkey 5094
# For each <VLAN port> in <VLAN ports>
RS 8XXX (config)# vlag adminkey 1000+<VLAN port> enable

# Define Switch 2
RS 8XXX (config)# vlag tier-id 10
RS 8XXX (config)# vlag hlthchk peer-ip <customer-mgmt IP>
RS 8XXX (config)# vlag isl adminkey 5094
# For each <VLAN port> in <VLAN ports>
RS 8XXX (config)# vlag adminkey 1000+<VLAN port> enable
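After the VLAN, LACP and VLAG definitions are in place, the result can be inspected on each switch before saving. The following show commands are commonly available on RackSwitch N/OS ISCLI; verify the exact names and output format against the CLI reference for your firmware version:

RS 8XXX# show vlan
RS 8XXX# show lacp information
RS 8XXX# show vlag information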
5.7.3.5 Save Changes to the Switch FLASH Memory

RS 8XXX# copy running-config startup-config

5.8 Inter-Site Portchannel Configuration

In a stretched HA or DR scenario, an inter-site port channel needs to be configured. This chapter describes several options for how this configuration can be implemented; which one applies depends on the customer premise equipment and infrastructure. If the port channel trunk is for a DR setup, only the GPFS VLANs have to be enabled on the trunk interfaces; if the port channel configuration is needed for a stretched HA setup, the HANA and the GPFS VLANs have to be enabled on the trunk interfaces.

The following examples are based on the G8264 port layout. For the other supported RackSwitch types, the following ports should be used:
• G8124: depending on the connection type, switch port 22, or ports 21-22 respectively
• G8272: depending on the connection type, switch port 48, or ports 47-48 respectively
• G8296: depending on the connection type, switch port 86, or ports 85-86 respectively

5.8.1 Static Trunk over one Inter-Site Link

If there is just one single site-interconnect available, the following configuration has to be applied to the switches to establish a static inter-site connection.

[drawing: Single Inter-Site Link – switch pairs 1a/1b and 2a/2b, one per site, each pair with GPFS on odd ports 17–63, HANA on even ports 18–64 and a local bonded ISL on ports 1 and 5; a single link connects switch 1a to switch 2a on port 64]

• Switchport portchannel configuration:

# Define Switch 1a, 2a
# RS 8264 port 64
# RS 8272 port 48
# RS 8296 port 86
# RS 8124 port 22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution.
# In a stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of a DR solution.
# Only the GPFS VLAN must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
5.8.2 Portchannel over two Inter-Site Links

If there are two site-interconnect fibres available, as shown in the drawing below, each cable should be connected to a different switch of the pair, instead of connecting both to just one switch pair. The following configuration has to be applied to the switches to establish one logical static inter-site connection over 2 cables.

[Figure: Redundant Inter-Site Link (one on each switch): one G8264 switch pair per site; on each switch MGMT on port 1, ISL on port 5, GPFS on ports 17-63, HANA on ports 18-64; one inter-site cable per switch, i.e. two cables in total]

• Switchport Portchannel Configuration

# Define Switch 1a,1b,2a,2b
# RS 8264 port 64
# RS 8272 port 48
# RS 8296 port 86
# RS 8124 port 22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of a DR solution. Only the GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 enable
RS 8XXX (config)# vlag portchannel 63 enable
5.8.3 Portchannel over four Inter-Site Links

If there are four site-interconnect fibres available, as shown in the drawing below, two of them should be connected on port 63 and port 64 of each switch. The following configuration has to be applied to the switches to establish one logical static inter-site connection over 4 cables.

[Figure: Portchannel over four inter-site links (two on each switch): one G8264 switch pair per site; on each switch MGMT on port 1, ISL on port 5, GPFS on ports 17-63(+64), HANA on ports 18-64(+63); two inter-site cables per switch, i.e. four cables in total]

• Switchport Portchannel Configuration

# Define Switch 1a,1b,2a,2b
# RS 8264 port 63,64
# RS 8272 port 47,48
# RS 8296 port 85,86
# RS 8124 port 21,22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of a DR solution. Only the GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 enable
RS 8XXX (config)# vlag portchannel 63 enable

5.8.4 Save and Restore Switch Configuration

5.8.4.1 Save Switch Configuration Locally

Execute:

# scp admin@switch.example.com:getcfg .

5.8.4.2 Restore Switch Configuration

Execute:

# scp getcfg admin@switch.example.com:putcfg

5.9 Generation of Switch Configurations

The script SwitchAutoConfig.sh can be used to create basic configurations for the switch models G8124 and G8264. SwitchAutoConfig.sh can be found in /opt/lenovo/saphana/bin/. As a prerequisite for SwitchAutoConfig.sh, the switches must be base configured as described in chapter 5.7.1: Basic Switch Configuration Setup on page 31, and must be reachable via ssh over the network.

You will be asked to enter configuration details like IP addresses. After the configuration part you have to enter the ssh password of the switches, twice per switch. The first time you enter the ssh password, the script will check the firmware version of the switches; the second time the password must be entered for the deployment process.

Note
The current version of the script does not support the automated creation of G8272 and G8296 RackSwitch configurations. Therefore it is also not possible to use the -d option for these models. To obtain such configuration files, generate the configuration for the G8264 RackSwitch (SwitchAutoConfig.sh -c G8264) and adapt the port numbers according to table 19: G8272 RackSwitch port assignments on page 28 or table 20: G8296 RackSwitch port assignments on page 29. We recommend to copy and paste the created configuration into the serial console of the switches. Afterwards the configuration can be saved as described in chapter 5.7.9: Save changes to switch FLASH memory on page 36.

5.9.1 Script Usage

./SwitchAutoConfig.sh -h
usage: ./SwitchAutoConfig.sh [-c type] [-d type]
       styletypes=[G8264|G8052|G8124]

-c   just creates switch configurations for the chosen switch type
     Example: SwitchAutoConfig.sh -c G8264
-d   creates and also deploys the switch configurations for the chosen switch type
     Example: SwitchAutoConfig.sh -d G8264

5.9.2 Examples

The following command will create the configurations for a G8264 switch pair:

./SwitchAutoConfig.sh -c G8264

The following command will create and deploy the configurations for a G8264 switch pair:

./SwitchAutoConfig.sh -d G8264
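Whether the configuration was entered manually or deployed with SwitchAutoConfig.sh, a short manual check per switch is advisable. A sketch, assuming standard N/OS show commands and the example addressing used in this guide (the address being pinged is the sample GPFS IP of node 01):

RS 8XXX# show vlan
RS 8XXX# show interface ip
RS 8XXX# ping 192.168.10.101

show vlan should list the GPFS, HANA, IMM and management VLANs with the expected member ports; the ping verifies that the switch can reach server addresses inside the VLANs via its VLAN IP interfaces.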
5.9.3 Input Values

All the default values are based on the Networking Guide standards, but can be changed if needed. Most input values like hostname or IP address need to be provided by the customer. The GPFS, HANA, xCat and IMM VLAN IPs are IPs that reside within those VLANs. Their purpose is to be able to ping server addresses within these VLANs from the switch.

A port channel is only needed in case of a DR or HA cluster. If a port channel should be configured, the script will ask for the type of port channel that has to be configured. There are two port channel options: HA or DR. For the G8052 the script will ask for a MGMT port, because the G8052 has no dedicated management port.

Attention
Please be very careful if you create the configuration for a switch connected to the customer network. In this case make sure that the switch is disconnected during the setup. Only if the configuration is complete and matches the customer requirements bring up the connection to the customer network.

After the configuration deployment the switches should be checked manually.

6 Guided Install of the Lenovo Solution

This section describes the installation and configuration of HANA on SUSE Linux Enterprise Server for SAP Applications 11 SP3 and HANA on Red Hat Enterprise Linux 6.5 or 6.6. Subsections that only apply to one of these operating systems are marked accordingly. This section can be applied starting from the non-OS component DVD version 1.9.96-13.

The software installation and configuration is executed at the customer site. This includes networking customization, IBM GPFS cluster setup and SAP HANA installation. It does not include the connection and replication to SAP Business Suite back end systems (such as ERP or BW).

Attention
Please read SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11, and SAP Note 2159166 – SAP HANA SPS 09 Database Revision 96 to learn about known issues and recommendations by SAP.

Note
It is highly recommended to check the system setup and software versions of installed components after the complete installation process. See section 15.2: Basic System Check on page 183 on how to achieve this.

Phase | Actions
1 | BoMC firmware upgrades (recommended), reboot; OS installation, reboot
2 | OS and network configuration, reboot
3 | RAID, GPFS configuration & installation, HANA configuration & installation

Table 22: Installation Process and Phases

Guided Installation Instructions for Single Node Installations:

1. Certified hardware ordered and available: Chapter 4: Hardware Configurations on page 9
2. Preparation: Chapter 6.1: Preparation on page 42
3. Acquiring TCP/IP addresses and host names: Section 5.3: Network Configuration on page 22
4. Phase 1: Installation of the operating system: Section 6.2: Phase 1 on page 48
5. Phase 2: OS configuration: Section 6.3: Phase 2 – SLES for SAP on page 53, or 6.4: Phase 2 – RHEL on page 58
6. Interim system check: Section 6.5: Interim Check on page 60
7. Phase 3: Installing IBM GPFS, SAP HANA and final configuration: Section 6.6: Phase 3 on page 62
8. Final system check: Chapter 15: System Check and Support on page 183
Guided Installation Instructions for Clustered Nodes Installations:

1. Certified hardware ordered and available: Chapter 4: Hardware Configurations on page 9
2. Preparation: Chapter 6.1: Preparation on page 42
3. Acquiring TCP/IP addresses and host names: Section 5.3: Network Configuration on page 22
4. Cluster network switch setup: Section 5.7: Network Configurations in a Clustered Environment on page 30
5. Phase 1: Installation of the operating system: Section 6.2: Phase 1 on page 48
6. Phase 2: OS configuration: Section 6.3: Phase 2 – SLES for SAP on page 53, or 6.4: Phase 2 – RHEL on page 58
7. Interim system check: Section 6.5: Interim Check on page 60
8. Phase 3: Installing IBM GPFS, SAP HANA and final configuration: Section 6.6: Phase 3 on page 62
9. Final system check: Chapter 15: System Check and Support on page 183

6.1 Preparation

You might not be able to access online documentation at the customer site, so please familiarize yourself with the following links and downloads before arriving; otherwise you may find yourself on site without information that is useful. What follows are a few tips we have collected while talking with SAP. We highly recommend the SAP HANA Installation Guides as well as the SAP HANA TOC Manual. Please note that these documents in turn might reference other documentation not mentioned here, so you would need to get this as well.

Experience SAP HANA | http://experiencesaphana.com
SAP Service Marketplace | https://service.sap.com/hana*
SAP Help Portal – SAP HANA | http://help.sap.com/hana_appliance
SAP HANA 1.0: Central Note | https://service.sap.com/sap/support/notes/1514967*
SAP HANA Sizing Guide | https://service.sap.com/sap/support/notes/1514966*
Release Restrictions Note | https://service.sap.com/sap/support/notes/1513496*

Table 23: SAP HANA references (* SAP Service Marketplace ID required)

Depending on the customer's operation guidelines it might be necessary to prepare the customer infrastructure beforehand so that the HANA appliance can be integrated in a smooth and timely manner.

6.1.1 Firewall Preparations

If the customer has firewalls running between the HANA appliance and the connected components (ERP, clients, backup & restore server, etc.), make sure that the appropriate network ports are opened. For details on the relevant ports please refer to the SAP HANA security guide at http://help.sap.com/hana_appliance → Security.
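Where such firewalls exist, a quick reachability test from a client inside the affected network segment can save troubleshooting time later. This is only a sketch: the host name is a placeholder, port 1129 is the HANA Lifecycle Manager HTTPS port used later in this guide, and 3<instance>15 (30015 for instance number 00) is the SQL port of the index server; the complete port list must be taken from the SAP HANA security guide.

   nc -z -w3 hana01.customer.example 1129    # HANA Lifecycle Manager (HTTPS)
   nc -z -w3 hana01.customer.example 30015   # index server SQL port, 3<instance>15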
6.1.2 Lenovo Systems solution for SAP HANA Additional Software Stack

The customer needs to have the "Non OS content for Lenovo Systems solution for SAP HANA appliance additional software stack" before the service person arrives. A DVD should have arrived with every system. It is not possible due to legal reasons to download the DVD from the Internet. In case a customer has lost the DVD, or did not receive such, he needs to order it directly from Lenovo. If unsure, please direct the customer to contact Lenovo support and provide the part number (p/n) for the latest version from the table below. The other numbers are here for reference.

P/N | Description | Remarks | Supported OS
00MV674 | SAP HANA FRU Pkg v.1.9.96-13 for X6 | latest version | SLES for SAP 11 SP3, RHEL 6.6

Previous versions are not covered by this document.

Table 24: DVD Part Numbers

6.1.3 Software, Firmware and Drivers

The System x servers' software, firmware and driver versions should either be at the exact level as given here, or can be above if indicated so, unless there are restrictions for certain firmware packages in table 25: Supported Firmware, Software and Driver Levels on page 44. The versions listed in that table have been certified with SAP. Certain firmware levels have been declared as static, and an upgrade to a higher version is not supported. If an upgrade to a higher version is supported without consultation of Lenovo/SAP, this is indicated with a check mark (✓) in the table. Updates that require a statement from Lenovo or SAP before upgrading are indicated accordingly. If unsure, you should first contact SAP Support (via the SAP OSS system) with a direct question regarding the latest drivers and their support.

Attention
Mandatory update of the GCC runtime environment for SAP HANA SPS08 (Revision 80) or higher. See SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11 for details.

Attention
Mandatory update of the GNU C Library is required after installation when installing SAP HANA Database revision 80 or higher. See SAP Note 1888072 – SAP HANA DB: Indexserver crash in __strcmp_sse42 for details.

Attention
Mandatory kernel update after installation on SLES for SAP 11 SP3 to kernel version 3.0.101-0.47.52, or higher.

Attention
Mandatory kernel update after installation on RHEL 6.6 to kernel version 2.6.32-504.16.2.el6, or higher. See SAP Note 2136965 – SAP HANA DB: Recommended OS settings for RHEL 6.6.

In general you should use BoMC12 to apply the newest firmware versions before starting the OS installation, unless there are restrictions for certain firmware packages in table 25: Supported Firmware, Software and Driver Levels on page 44.

12 https://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-BOMC
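Whether the mandatory levels are already in place can be checked quickly on a running system; the package names are the ones used in this section:

   uname -r                                   # compare with the mandatory kernel level
   rpm -q glibc gcc47-runtime                 # SLES for SAP 11 SP3
   rpm -q compat-sap-c++ nss-softokn-freebl   # RHEL 6.6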
SLES – OS Software and Drivers
Component | Version
Recommended SLES for SAP Applications 11 SP3 kernel | 3.0.101-0.47.52* or higher ✓
The GNU C Library (glibc) | 2.11.3-17.56.2 or higher ✓
GCC runtime environment (gcc47-runtime) | 4.7.2_20130108-0.17.2 or higher ✓
SLES for SAP Applications 11 SP3 software and drivers | Updates within SP3 as allowed by SAP ✓

RHEL – OS Software and Drivers
Component | Version
RHEL 6.6 kernel | 2.6.32-504.16.2.el6* ✓
GCC runtime environment (compat-sap-c++) | 4.7.2-10 ✓
Network Security Services (nss-softokn-freebl) | 3.14.3-22 ✓
RHEL 6.6 software and drivers | Updates within RHEL 6.6 as allowed by SAP ✓

Misc. Software, Firmware and Drivers
Component | Version
IBM General Parallel File System (GPFS) | Recommended: 4.1.0-8 or higher ✓
ServeRAID M5210 Controller Firmware | FW Package Build: 24.7.0-0052 (for internal disks), FW Version: 4.270.00-4288 ✓
ServeRAID M5120 Controller Firmware | FW Package Build: 23.22.0-0018 (for external expansion unit), FW Version: 3.450.55-4187 ✓
ServeRAID M5225 Controller Firmware | FW Package Build: 24.2.1-0052 (for external expansion unit), FW Version: 4.220.120-3749 ✓

System x3850 X6 Specific Firmware
Component | Version
Integrated Management Module (IMM) | TCOO08Z ✓
UEFI (FW/BIOS) Flash | A9E122XUS ✓
DSA | DSALA65 ✓

System x3950 X6 Specific Firmware
Component | Version
Integrated Management Module (IMM) | TCOO08Z ✓
UEFI (FW/BIOS) Flash | A9E122XUS ✓
DSA | DSALA65 ✓

Lenovo Networking Operating System
Component | Version
Lenovo RackSwitch G8052 | N/OS 8.0 or higher ✓
Lenovo RackSwitch G8264 | N/OS 8.0 or higher ✓
Lenovo RackSwitch G8124 | N/OS 8.0 or higher ✓
Lenovo RackSwitch G8296 | N/OS 8.0 or higher ✓
Lenovo RackSwitch G8272 | N/OS 8.0 or higher ✓

Table 25: Supported Firmware, Software and Driver Levels

* An update of the kernel requires recompiling the GPFS drivers; see the Lenovo Operations Guide for SAP HANA appliance for further details.

Note
When installing or performing upgrades, the operator should be prepared to expect multiple reboots. Please refer to chapter 12.2.2: Reboot Behavior on page 154.

Note
UEFI and IMM firmware levels should always be updated in parallel to avoid possible contention problems between the two.
Warning
Do not downgrade existing firmware levels unless otherwise explicitly recommended to do so by Lenovo.

6.1.4 Card Placement

Attention
You may need to change the card placement. The machine coming from the factory may have a different card layout than we require. Please refer to section 4.4: Card Placement on page 15 for the assignment of which card belongs in which slot. Please be aware that only with the correct card layout your machine is supported by Lenovo.

6.1.5 Hardware UEFI Configuration

These steps are necessary before the operating system can be installed. Follow the next instructions on how to configure the servers' UEFI parameters correctly for use with the SAP HANA appliance. Please check in this step also the power policy settings as described in chapter 12.2.1: Power Policy Configuration on page 154.

6.1.5.1 Obtaining web interface access for IMM

To access the web interface of the IMM and use the remote presence feature, you need the IP address of the IMM. You can modify the IMM IP address through the UEFI Setup utility. When the system comes from Lenovo, it should already be set to the settings listed. To locate or change the IP address, complete the following steps:

1. Turn on the server.
2. When the prompt <F1> Setup is displayed, press F1.
3. From the setup utility main menu, select System Settings → Integrated Management Module → Network Configuration.
4. Obtain or change the network settings (IP address, host name, subnet mask, gateway).
5. Save the network settings and confirm to restart the IMM.
6. Press Esc to get back to the main menu.

6.1.5.2 Feature on Demand Activation

To be able to configure the RAID adapters correctly, some Feature on Demand (FoD) keys need to be activated. It is possible that they are already activated when shipped. You can activate the FoDs via the IMM: after the login go to IMM Management → Activation Key Management.

• ServeRAID M5100/M5200 Series Performance Key for Lenovo System x
• ServeRAID M5100/M5200 Series SSD Caching Enabler for Lenovo System x
• (optional, only if RAID6 is required by the customer) ServeRAID M5100/M5200 Series RAID 6 Upgrade for Lenovo System x (RAID6 can only be configured on external M5120/M5225 RAID adapters)

The necessary documentation was shipped with the servers to the customer.

Note
We recommend that the customer keeps a backup of the Feature on Demand keys.

6.1.5.3 Disable GPT Recovery

This step must be done before the installation.

1. Select System Settings → Recovery & RAS → Disk GPT Recovery → Disk GPT Recovery.
2. Choose None.
3. Select Save Settings and press Enter.
4. Press Esc three times to get back to the main menu.

6.1.5.4 General Performance-optimized Settings for SAP HANA

Please check and set the settings in UEFI according to the following tables.

Note
Please be aware that not every setting is available on every platform.

1. In the UEFI/BIOS select Load Default Settings.
2. Select System Settings → Operating Modes.
3. Choose Operating Mode → Custom Mode.
4. Set the remaining values according to table 26, e.g. Choose C1 Enhance Mode → Disable, Choose Power/Performance Bias → Platform Controlled, Choose Platform Controlled Type → Max Performance.
5. Select Save Settings and press Enter; press Esc twice.
6. Select System Settings → Power.
7. Choose Workload Configuration → I/O sensitive.
8. Select Save Settings and press Enter; press Esc twice.

Section: Operation Modes
Setting | Value | ASU tool setting
Choose Operating Mode | Custom Mode | OperatingModes.ChooseOperatingMode
Memory Speed | Max Performance | Memory.MemorySpeed
Memory Power Management | Automatic | Memory.MemoryPowerManagement
Proc Performance States | Enable | Processors.ProcessorPerformanceStates
C1 Enhance Mode | Disable | Processors.C1EnhancedMode
QPI Link Frequency | Max Performance | Processors.QPILinkFrequency
Turbo Mode | Enable | Processors.TurboMode
CPU C-States | Enable | Processors.C-States
Package ACPI C-State Limit | ACPI C3 | Processors.PackageACPIC-StateLimit
Power/Performance Bias | Platform Controlled | Power.PowerPerformanceBias
Platform Controlled Type | Max Performance | Power.PlatformControlledType

Table 26: Required Operation Modes UEFI settings
Section: Processors
Setting | Value | ASU tool setting
Turbo Mode | Enable | Processors.TurboMode
Processor Performance States | Enable | Processors.ProcessorPerformanceStates
C-States | Enable | Processors.C-States
Package ACPI C-State Limit | ACPI C3 | Processors.PackageACPIC-StateLimit
C1 Enhanced Mode | Disable | Processors.C1EnhancedMode
Hyper Threading | Enable | Processors.Hyper-Threading
Execute Disable Bit | Enable | Processors.ExecuteDisableBit
Intel Virtualization Technology | Enable | Processors.IntelVirtualizationTechnology
Enable SMX | Disable | Processors.EnableSMX
Hardware Prefetcher | Enable | Processors.HardwarePrefetcher
Adjacent Cache Prefetch | Enable | Processors.AdjacentCachePrefetch
DCU Streamer Prefetcher | Enable | Processors.DCUStreamerPrefetcher
DCU IP Prefetcher | Enable | Processors.DCUIPPrefetcher
Direct Cache Access (DCA) | Enable | Processors.DirectCacheAccessDCA
Cores in CPU Package | All | Processors.CoresinCPUPackage
QPI Link Frequency | Max Performance | Processors.QPILinkFrequency
Energy Efficient Turbo | Enable | Processors.EnergyEfficientTurbo
Uncore Frequency Scaling | Enable | Processors.UncoreFrequencyScaling
MWAIT/MMONITOR | Enable | Processors.MWAITMMONITOR

Table 27: Required Processors UEFI settings

Section: Power
Setting | Value | ASU tool setting
Active Energy Manager | Capping Disabled | Power.ActiveEnergyManager
Power/Performance Bias | Platform Controlled | Power.PowerPerformanceBias
Platform Controlled Type | Max Performance | Power.PlatformControlledType
Workload Configuration | I/O sensitive | Power.WorkloadConfiguration
10Gb Mezz Card Standby Power | Disable | Power.10GbMezzCardStandbyPower

Table 28: Required Power UEFI settings

Section: Memory
Setting | Value | ASU tool setting
Memory Mode | Independent | Memory.MemoryMode
Memory Speed | Max Performance | Memory.MemorySpeed
Memory Power Management | Automatic | Memory.MemoryPowerManagement
Socket Interleave | NUMA | Memory.SocketInterleave
Memory Data Scrambling | Enable | Memory.MemoryDataScrambling
Patrol Scrub | Enable | Memory.PatrolScrub
Mirroring | Disable | Memory.Mirroring
Sparing | Disable | Memory.Sparing
Rank Margining Test | Disable | Memory.RankMarginingTest

Table 29: Required Memory UEFI settings
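The "ASU tool setting" names in tables 26 to 29 can also be applied out-of-band with the Lenovo Advanced Settings Utility (asu64) instead of the F1 setup menus. The following is only a sketch: it assumes asu64 is available, the IMM address and credentials are placeholders, and the exact setting names should be verified with asu64 show before changing anything.

   asu64 show OperatingModes.ChooseOperatingMode --host <IMM IP> --user USERID --password <password>
   asu64 set OperatingModes.ChooseOperatingMode "Custom Mode" --host <IMM IP> --user USERID --password <password>
   asu64 set Processors.C1EnhancedMode Disable --host <IMM IP> --user USERID --password <password>

Settings changed this way become effective with the next reboot, just like changes made in the F1 setup.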
6.1.5.5 Boot Order

The installer supports (starting from release 1.8.80-10) only the installation in UEFI Mode. The default boot order is: CD/DVD Rom, Hard Disk 0, PXE Network. If you want to install in UEFI Mode, you do not have to change the boot order at all. After a successful installation there will be a new entry on top of the list for the newly installed operating system. For the boot loaders used see table 30.

Type | Supported from | Boot loader
SLES 11 SP3 | 1.7.70-8 | ELILO
RHEL 6.6 | 1.9.96-13 | Grub

Table 30: Boot options and boot loaders used

Attention
You must not activate UEFI Secure Boot (it is disabled by default), because the installation of GPFS and other software add-ons will fail.

Note
When you reinstall a system but changed the Legacy/UEFI Mode, make sure the partition table is cleared, either by wiping it or by recreating the RAID1 VD for the OS.

6.2 Phase 1

The Lenovo Systems Solution for SAP HANA appliance is ready for an installation with the factory provided image.

6.2.1 Storage Configuration – RAID Setup

The RAID configuration of all RAID5 and RAID6 arrays is executed by the automated installer starting with release 1.8.80-10. The only manual step the installing person has to do is to configure the RAID1 for the OS. There are different possible setups for the RAID controllers with different numbers of SSDs and HDDs:

• M5210 (on x3950 X6: first internal) – 2 SSDs + 6 HDDs: 1 × RAID1 for OS, 1 × RAID5 for GPFS
• M5210 (only x3950 X6, second internal) – 2 SSDs + 6 HDDs: 1 × RAID5 for GPFS
• M5120/M5225:
  – 2 SSDs + 9 HDDs: 1 × RAID5 for GPFS
  – 2 SSDs + 10 HDDs: 1 × RAID6 for GPFS
  – 2 SSDs + 18 HDDs: 2 × RAID5 for GPFS
  – 2 SSDs + 20 HDDs: 2 × RAID6 for GPFS
  – Optionally: +2 SSDs13

The following tables are meant as an overview and a reference in case the automated RAID configuration is not working properly. Tables 31: x3850 X6 RAID Controller Configuration on page 49 and 32: x3950 X6 RAID Controller Configuration on page 50 describe possible configurations of the RAID controllers.

Controller | Models | VD ID | Type | Physical Drives | Config | Comment
M5210 | all | 0 | HDD | 2 | RAID1 | VD for OS
M5210 | all | 1 | HDD | 4 | (3+p) RAID5 | GPFS, CacheCade enabled
M5210 | all | 2 | SSD | 2 | RAID0† | CacheCade of VD1
M5120/M5225 | Single node: ≥ 768GB, Cluster: ≥ 512GB | 0* | HDD | 9 / 10 | (8+p) RAID5 / (8+2p) RAID6 | GPFS, CacheCade enabled
M5120/M5225 | Single node: ≥ 768GB, Cluster: ≥ 512GB | 1 | SSD | 2 | RAID0 | CacheCade of VD0

Table 31: x3850 X6 RAID Controller Configuration

* There are different possible configurations for this VD depending on the number of SSDs/HDDs connected to the controller.
† RAID1 for all CacheCade arrays is possible, but may require additional hardware. See section CacheCade RAID1 Configuration in the Operations Guide for X6 based models for more details.

Controller | Models | VD ID | Type | Physical Drives | Config | Comment
1st M5210 | all | 0 | HDD | 2 | RAID1 | VD for OS
1st M5210 | all | 1 | HDD | 4 | (3+p) RAID5 | GPFS, CacheCade enabled
1st M5210 | all | 2 | SSD | 2 | RAID0† | CacheCade of VD1
2nd M5210 | Single node: ≥ 768GB, Cluster: ≥ 512GB | 0 | HDD | 6 | (5+p) RAID5 | GPFS, CacheCade enabled
2nd M5210 | Single node: ≥ 768GB, Cluster: ≥ 512GB | 1 | SSD | 2 | RAID0 | CacheCade of VD0
1st M5120/M5225 | Single node: ≥ 3072GB, Cluster: ≥ 2048GB | 0*, 1* | HDD | 9 / 10 each | (8+p) RAID5 / (8+2p) RAID6 | GPFS, CacheCade enabled
1st M5120/M5225 | Single node: ≥ 3072GB, Cluster: ≥ 2048GB | 1/2** | SSD | 2/4* | RAID0 | CacheCade for VD0&1
2nd M5120/M5225 | Single node: ≥ 6144GB, Cluster: ≥ 4096GB | 0*, 1* | HDD | 9 / 10 each | (8+p) RAID5 / (8+2p) RAID6 | GPFS, CacheCade enabled
2nd M5120/M5225 | Single node: ≥ 6144GB, Cluster: ≥ 4096GB | 1/2** | SSD | 2/4* | RAID0 | CacheCade for VD0&1
3rd M5120/M5225 | Single node: ≥ 12,288GB, Cluster: ≥ 6144GB | 0* | HDD | 9 / 10 | (8+p) RAID5 / (8+2p) RAID6 | GPFS, CacheCade enabled
3rd M5120/M5225 | Single node: ≥ 12,288GB, Cluster: ≥ 6144GB | 1 | SSD | 2 | RAID0 | CacheCade for VD0

Table 32: x3950 X6 RAID Controller Configuration

* There are different possible configurations for this VD depending on the number of SSDs/HDDs connected to the controller.
** This number will depend on the availability of VD1.
† RAID1 for all CacheCade arrays is possible, but may require additional hardware. See section CacheCade RAID1 Configuration in the Operations Guide for X6 based models for more details.

13 Optionally: +2 SSDs for CacheCade RAID1. For details on hardware configuration and setup see the Operations Guide for X6 based models, section CacheCade RAID1 Configuration.

Device | Partition # | Partition Name* | Size | File system | Mount Point
/dev/sda | 1 | /dev/sda1 | 148MB | vfat | /boot/efi
/dev/sda | 2 | /dev/sda2 | 64GB | ext3/4 | /
/dev/sda | 3 | /dev/sda3 | 32GB | swap | (none)
/dev/sda | 4 | /dev/sda4 | 148MB | vfat | /var/backup/boot/efi
/dev/sda | 5 | /dev/sda5 | 64GB | ext3/4 | /var/backup
/dev/sd[b-z] | – | unpartitioned (whole device) | 100% | GPFS | /sapmnt (sapmntdata)

Table 33: Partition Scheme for Single Node and Cluster Installations

* The actual partition numbers may vary depending on whether you use RHEL or SLES for SAP.

6.2.1.1 Starting the MegaRAID Configuration Tool

1. In the UEFI main menu select System Settings → Storage.
2. Select the internal RAID controller. You can determine the first internal controller by the smaller bus number on the right side of the "Storage" view. If your server has two M5210 controllers, only configure the first controller as described here.
3. Select Main Menu → Configuration Management.
4. (If shown) Select Manage Foreign Configuration → Clear Foreign Configuration.
5. Select Clear Configuration and confirm. (If this is not possible, press Esc and select Configuration Management again.)
6. Select Create Profile Based Virtual Drive.
7. Select Generic RAID 1. The RAID1 must be configured on HDDs.
8. Select Save Configuration and confirm.
9. Leave the controller configuration.

Warning
At this point, only the RAID1 for the OS will be configured. The other RAID arrays are generated automatically in phase 3 of the setup.

6.2.2 Mounting Installation Images using the IMM Virtual Media Center

Using the IMM, the machine can be booted into the installation media. Directions on how to use the IMM can be found in the Lenovo server installation guidelines respective to the System x model purchased. The server software installation process varies slightly depending on how the mounted software images are attached to the server. This section describes the different image mounting methods and the available options to install the images for each method. See table 34: DVD/ISO Media Install Options on page 52. Installations via USB drives are supported.

There are two Lenovo DVDs shipped besides the DVDs of the operating system media kit. The "Lenovo Installation" DVD (Lenovo non-OS components) contains all files that are needed for a successful installation of the appliance. The "Additional Products" DVD contains additional files for SAP HANA that are not required for a successful installation; we recommend not mounting this DVD. When installing SLES for SAP there is an additional SLES DVD shipped containing necessary compatibility RPMs. When installing RHEL there is an additional RHEL for HANA DVD shipped containing necessary compatibility RPMs. If you want to have these files automatically transferred to the server(s) during installation, you must use option 1 in table 34.

DVD/ISO Media | Option 1 | Option 2
SLES for SAP / RHEL | ✓ (1st in Virtual Media Manager) | ✓ (1st in Virtual Media Manager)
Lenovo non-OS Components | ✓ (2nd in Virtual Media Manager) | ✓ (USB stick)
RHEL for HANA or Compat. files for SLES | ✓ (3rd in Virtual Media Manager) | ✓ (2nd in Virtual Media Manager)
Additional Products | ✓ (4th, optional) | –

Table 34: DVD/ISO Media Install Options
6.2.3 Starting the Automatic Installation Process

• SLES, UEFI Mode: After you mount the software images for the execution of the phase one install, restart the system and wait until the black boot-option screen from SUSE is displayed.
  – In the boot-option screen, use the arrow keys to select Installation and press e.
  – Go to the line linuxefi /boot/x86_64/loader/linux.
  – Option 1 and 2 in table 34: Append autoyast=usb:///
  – Press F10.

• RHEL, UEFI Mode: After you mount the software images for the execution of the phase one install, restart the system and press any key as soon as the RHEL boot loader starts, to enter the boot options menu.
  – In the boot-option screen, use the arrow keys to select Red Hat Enterprise Linux 6.6, press e, and press e again to edit the kernel parameters.
  – Option 1 in table 34: Append ks=cdrom:/ks.cfg. Option 2 in table 34: Append ks=hd:sdb:/ks.cfg.
  – Press Enter and then b.

• SLES and RHEL: The media will automatically install the SLES for SAP or RHEL operating system. You will not need to touch this system at this point. The machine will be properly partitioned, installed and initially configured. The installer will copy the extra software necessary for the SAP HANA product (GPFS and other software add-ons). After the system reboot, phase two of the installation will begin.

Note
Continue with Section 6.3: Phase 2 – SLES for SAP on page 53, or 6.4: Phase 2 – RHEL on page 58.

6.3 Phase 2 – SLES for SAP

Warning
If you had to restart the server in one of the next steps and you see this screen again, change into a console or open a terminal and execute service openibd start. If you do not do this, you will not be able to configure the network correctly in later steps. To open a console, press Ctrl + Alt + Fx, enter the command, and enter exit to close the console.

1. At the welcome screen select Next.
2. Ensure that the customer accepts the SUSE(R) Linux Enterprise Server for SAP Applications 11 SP3 – SUSE Software License Agreement. Select Next.

Figure 16: License Agreement

3. On the next screen enter keyboard preferences. Select Next.
4. Assign the server's host and domain name according to the customer's wishes. Select Next.

Figure 17: Hostname and Domain Name

5. The networking adapters need to be configured to the customer's network landscape. This is left to the customer and service personnel to properly define in advance. Depending on the customer's network infrastructure, the other Ethernet adapters need to be modified according to table 15: IP address configuration on page 23.

Figure 18: Network Configuration
(a) Click on the green highlighted and underlined Network Interfaces.

(b) There are two bonded devices (bond0 and bond1) configured for the Mellanox adapters. They are used by default for the IBM GPFS and SAP HANA private networks and should not be changed. These are private networks and do not need to be connected to the customer's network landscape.

• Select an interface that will be configured (e.g. for external communication) and click Edit.

Warning
You have to assign the correct IP and the fully qualified domain name of the server to the interface that will be connected to the customer's network. Do not use the preset values in the fields IP address and hostname; change the values in the marked black box to reasonable values. If not changed, the installation will fail at a later point.

(c) Single node: If the customer wishes to use the 10Gb adapters for his client access, then you need to change the adapter used for each of these bonded adapters. It is not necessary in a single node installation to use two adapters, only that one adapter is assigned with the correct private networking host names and IP addresses.

(d) Single node without Mellanox cards: If the machine is configured without a Mellanox card, bond0 and bond1 will be empty (i.e. have no slave interfaces), but still be present. Ports of NICs that are placed in the server as a replacement for the Mellanox cards will be named starting from eth100.

• Delete bond0 and bond1: Select the interface and click Delete.
• In the Address tab select Add.
• As "Alias Name" enter "GPFS", as "IP Address" enter "127.0.0.1", as "Netmask" enter "/24". Click OK.
• In the Address tab select Add.
• As "Alias Name" enter "HANA", as "IP Address" enter "127.0.0.2", as "Netmask" enter "/24". Click OK.
• Click Next.

(e) Cluster node: It is important to modify the host name/IP address pair of gpfsnodeNN / 127.0.0.1 (e.g. 192.168.10.101/24 gpfsnode01 for bond0) in order to properly auto-configure the private network. It is also important to modify the host name/IP address pair of hananodeNN / 127.0.0.1 (e.g. 192.168.20.101/24 hananode01 for bond1). Follow the advice given by the customer in table 15: IP address configuration on page 23. Please see figure 19 on page 56.

Figure 19: Cluster Node NIC Configuration dialog bond0

Note
In case the customer plans to scale out the single node installation to a cluster by adding more servers: Plan the network configuration for the GPFS and HANA networks as if the cluster was already present, to simplify a later scale out.
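Once the network dialog is finished, the private network assignment can be verified from a console before continuing; the addresses below are the examples used above:

   ip addr show bond0 | grep "inet "    # expect the gpfsnodeNN address, e.g. 192.168.10.101/24
   getent hosts gpfsnode01 hananode01   # both names must resolve to the private IPs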
(f) Under the tabs Hostname/DNS and Routing confirm host name, domain name, name servers, search domain(s) and routing information, and add any missing information. Select Next.

6. On the next screen enter clock and time zone information. Select Next.

Figure 20: Clock and Time Zone

7. A network time protocol (NTP) server should be configured. It is mandatory to configure it on cluster nodes and highly recommended to configure it on single node installations. Select OK.

Figure 21: Advanced NTP Configuration

8. Enter the root password. Select Next.

Figure 22: Password for the System Administrator

9. Register the SLES system using the supplied envelope in the customer's delivery. Skip the software registration if there is no Internet access, since this is not possible at this time due to the missing network configuration. Do not forget to register the system after the successful installation.
10. On the "Installation Completed" screen press Finish.

Warning
Mandatory Kernel Update on SLES for SAP 11 SP3: At the time this document is created, kernel version 3.0.101-0.47.52 is mandatory for SLES for SAP 11 SP3. Please consult SAP if there is now a higher version recommended. Please see 13.4: Linux Kernel Update on page 165 for the steps needed to be performed.

Follow the instructions in section 6.5: Interim Check on page 60.
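If the time servers have to be adjusted later, outside the installer dialogs, this is done in /etc/ntp.conf. A minimal sketch: the server names are placeholders for the customer's NTP servers, and the driftfile path differs slightly between SLES and RHEL:

   server ntp1.customer.example iburst   # customer NTP server (placeholder)
   server ntp2.customer.example iburst   # second server for redundancy
   driftfile /var/lib/ntp/drift/ntp.drift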
6.4 Phase 2 – RHEL

1. At the "Welcome" screen click Forward.
2. Ensure that the customer accepts the license agreements for RHEL. Select Forward.
3. Skip the registration (select No), since this is not possible at this time due to the missing network configuration. Do not forget to register the system after the successful installation. Select Forward.
4. Configure the keyboard layout and select Forward.
5. Enter a root password and select Forward.
6. If the customer wants to, you can create a further (non-root) user on this machine. Then select Forward.
7. Configure the time servers: select "Synchronize date and time over the network" and remove the default time servers. For cluster installations the configuration of an NTP server is mandatory; for single node installations it is highly recommended. Select the timezone tab and select the correct timezone.
8. Deselect "Enable kdump?". Select Finish.
9. Log in as root user.
10. Execute system-config-network and select DNS configuration. Do not use the Device configuration option.
    • As "Hostname" enter the fully qualified domain name.
    • Enter the DNS servers.
    • As "DNS search path" enter the domain.
11. Configure /etc/hosts: Add a line for gpfsnodeXX and hananodeXX (where XX is the node number, e.g. 01) and a line for the external IP and hostname, for example:

    192.168.10.110 gpfsnode10
    192.168.20.110 hananode10
    10.10.10.10    myhananode10.domainname myhananode10

12. Edit the configuration file of the network device for the external communication, e.g. ifcfg-eth4, in /etc/sysconfig/network-scripts/. (Do not change the settings for eth0-3; they are the slaves of bond0 and bond1.) Make sure that the file contains the line ONBOOT=yes and that the line HWADDR= is deleted. At the end the file should look like this:

    DEVICE=eth[X]
    TYPE=Ethernet
    UUID=[UUID]
    ONBOOT=yes
    NM_CONTROLLED=no
    BOOTPROTO=none
    IPV6INIT=no
    IPADDR=[IP address]
    NETMASK=[netmask]
    GATEWAY=[gateway]

13. The remaining networking adapters need to be configured to the customer's network landscape. This is left to the customer and service personnel to properly define in advance. Depending on the customer's network infrastructure, the other Ethernet adapters need to be modified according to table 15: IP address configuration on page 23.

    (a) There are two bonded devices (bond0 and bond1) configured for the Mellanox adapters. They are used by default for the IBM GPFS and SAP HANA private networks and should not be changed. These are private networks and do not need to be connected to the customer's network landscape. Configure the interfaces via the files ifcfg-bond0 and ifcfg-bond1 in the directory /etc/sysconfig/network-scripts/.
    (b) Single node: If the customer wishes to use the 10Gb adapters for his client access, then you need to change the adapter used for each of these bonded adapters. It is not necessary in a single node installation to use two adapters, only that one adapter is assigned with the correct private networking host names and IP addresses.
    (c) Single node without Mellanox cards: If the machine is configured without a Mellanox card, bond0 and bond1 will be empty (i.e. have no slave interfaces), but still be present. There is no need to change the IP addresses of both bonded interfaces; they can remain 127.0.0.1 and 127.0.0.2. Ports of NICs that are placed in the server as a replacement for the Mellanox cards will be named starting from eth100.
    (d) Cluster node: It is important to modify the host name/IP address pair of gpfsnodeNN / 127.0.0.1 (e.g. 192.168.10.101/24 gpfsnode01 for bond0) in order to properly auto-configure the private network. It is also important to modify the host name/IP address pair of hananodeNN / 127.0.0.1 (e.g. 192.168.20.101/24 hananode01 for bond1). Follow the advice given by the customer in table 15: IP address configuration on page 23. A sketch of a resulting bond configuration file is shown after this list.

Note
In case the customer plans to scale out the single node installation to a cluster by adding more servers: Plan the network configuration for the GPFS and HANA networks as if the cluster was already present, to simplify a later scale out.

Warning
You have to assign the correct IP and the fully qualified domain name of the server to the interface that will be connected to the customer's network. Do not use the preset values in the fields IP address and hostname; change them to reasonable values. If not changed, the installation will fail at a later point.
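For illustration, a resulting ifcfg-bond0 on a cluster node could look like the following. The IP address matches the gpfsnode example above; the BONDING_OPTS line is an assumption for an LACP bond matching the switch configuration in chapter 5, and the options delivered by the installer take precedence:

   DEVICE=bond0
   TYPE=Bond
   ONBOOT=yes
   NM_CONTROLLED=no
   BOOTPROTO=none
   IPADDR=192.168.10.101
   NETMASK=255.255.255.0
   BONDING_OPTS="mode=802.3ad miimon=100"   # assumption; keep the installer-provided options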
14. Execute

    service network restart

    to load the new network configuration.
15. Reboot the server.

Attention
Mandatory kernel update after installation on RHEL 6.6: At the time this document is created, kernel version 2.6.32-504.16.2.el6, or higher, is mandatory for use with SAP HANA. See SAP Note 2136965 – SAP HANA DB: Recommended OS settings for RHEL 6.6. Please see 13.4: Linux Kernel Update on page 165 for the steps needed to be performed.

6.5 Interim Check

Before starting phase three, it is a good practice to ensure that you can access all machines on the network and that each node is ready to install and configure the SAP HANA appliance software. You can use the following commands to determine that each system is ready for the cluster install. On every node run the following commands and check that they are consistent with the cluster you are about to install:

1. Review the physical partitions (sdx):

   # cat /proc/partitions | awk '{ print $4 }' | sort

2. This command must properly show the node itself (not every node):

   # cat /etc/hosts | grep gpfsnode

3. This command must properly show the node itself (not every node):

   # cat /etc/hosts | grep hananode

4. Ensure all servers are reachable and that only the right servers were found, not other servers in the same network. Except for the server's own adapter, MAC addresses are shown and can be used for verifying that the right servers are reachable through the other network connections. The following command lists all reachable servers in both internal networks:

   # nmap -sP 192.168.10.0/24 192.168.20.0/24

5. Ensure the IBM GPFS private network is set up correctly:

   # cat /proc/net/bonding/bond0

6. Ensure the SAP HANA private network is set up correctly:

   # cat /proc/net/bonding/bond1

7. Check the time settings and NTP:

   # ntpq -p
   # date

If any of these values are not as expected, you should correct them and repeat this test before starting with phase three.
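On a cluster these checks can be driven from one node over SSH. A minimal sketch, assuming four nodes with the gpfsnode naming scheme used in this guide and working root SSH between the nodes:

   for h in gpfsnode01 gpfsnode02 gpfsnode03 gpfsnode04; do
       echo "### $h"
       ssh "$h" 'hostname; uname -r; ntpq -p | head -4'
   done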
6.5.1 Installation of Mandatory Packages

Attention
The following steps are mandatory for a successful installation of the appliance. Due to legal restrictions these steps are not automatically executed by the installation program.

6.5.1.1 SLES for SAP 11 SP3

Install the updates for libgcc_s1 and libstdc++6 shipped on the extra DVD delivered with the appliance:

   unzip [mount point of DVD]/gcc47-runtime.zip -d /tmp
   zypper install /tmp/libgcc_s1-4.7.2_20130108-0.17.2.x86_64.rpm /tmp/libstdc++6-4.7.2_20130108-0.17.2.x86_64.rpm

6.5.1.2 RHEL 6.6

For RHEL the compatibility pack is shipped on an additional DVD:

   yum -y install [mount point of RHEL for HANA DVD]/Packages/compat-sap*.rpm

Install the update for nss-softokn-freebl from the official repositories:

   yum update nss-softokn-freebl

Or, if the customer downloaded the RPMs:

   yum install nss-softokn-freebl-3.14.3-22.el6_6.x86_64.rpm

See also SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11.

6.5.2 Installation without Network Connectivity

Attention
Phase three needs uplink network connectivity and working DNS resolution to execute properly. Test this by pinging the external host name of all nodes on every node before continuing with the next phase. Otherwise, the SAP HANA database installation may break while trying to determine the hardware requirements.

If there is no connectivity to the customer's network and DNS server, use this workaround: Add the external host names specified in step 9 of phase 2 (dialog "SAP HANA Configuration", see screenshot above) to the /etc/hosts file on all nodes, so that every node can resolve the external host names of the other nodes.

6.6 Phase 3

Phase three starts after the machine has rebooted and you have ascertained that all the networking is working.

6.6.1 Verification of RAID Controller and HDD/SSD Firmware

Ensure that the RAID controllers and the HDDs and SSDs run with the latest firmware. If you used BoMC14 in an earlier step to install all available firmware updates on this server, skip this step.

Note
Firmware bugs in older firmware versions may lead to decreasing performance or even data loss.

6.6.2 HANA Installation

Attention
The SAP HANA installation packages are copied to the node in this step. Make sure that the Lenovo non-OS components DVD is still mounted via IMM (or USB thumb drive).

Attention
In case you are connecting via SSH from a machine that is not set for the English language, you must set the LANG environment variable to "C" beforehand:

   # export LANG=C

Download the latest hardware check script from SAP Note 1658845 – Recently certified SAP HANA hardware/OS not recognized. Copy the ZIP file to the server to /root/HanaHwCheck.zip. The automated (Lenovo) installer will update the HANA hardware check script automatically if it finds this file at this location.

Attention
Not providing the most recent HANA hardware check script may cause the HANA installation to fail.

After this preparation you may call the Lenovo SAP HANA appliance configuration tool, either from the console or from a SSH connection. It is recommended to call the configuration tool on the first node, but it can be started on any node of the cluster.

14 https://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-BOMC
6.6.2.1 Single Node Installation

Execute the following command as root user:

   # saphana-setup-saphana.sh

1. Read the License Agreement. Use the arrow keys to select Accept and press Enter.
2. Select Single Node and confirm.

Figure 23: Installation Mode Selection

3. Check that the appliance was detected correctly and confirm with Enter.
4. Accept the external hostname or set the correct value. Select OK.
5. Choose Yes if you want to get the RAID arrays configured automatically. We recommend to choose Yes. Only choose No if the RAID was already configured before.
6. Make sure that gpfsnode01 is assigned to the correct IP. Select OK. Repeat for hananode01.

Figure 24: GPFS IP Configuration Dialog

7. Read the GPFS license agreement and accept it with "1".
8. Confirm the shared filesystem mountpoint for HANA, or enter a customized value. HANA will be installed below this path. You can choose any other absolute, not yet existing path, such as /hana. Select OK.

Note
Currently the default and recommended value is /sapmnt. Nowadays SAP recommends to use /hana and this may become the default path in future releases. Both paths are supported, but for new installations in legacy environments /sapmnt is strongly recommended. The IBM GPFS internal name of this filesystem will still be sapmntdata in any case.

9. Enter a SID. Select OK.
10. Enter an Instance Number. Select OK.
11. Confirm the User ID of the HANA user, or enter a customized value. Select OK.
12. Confirm the Group ID of the HANA user, or enter a customized value. Select OK.
13. Enter the SAP HANA password. Select OK.

Figure 25: HANA Password Input Dialog

14. Confirm the password. Select OK.

6.6.2.2 Cluster Installation

Execute the following command as root user on every node in the cluster:

   # saphana-setup-saphana.sh

1. Read the License Agreement. Use the arrow keys to select Accept and press Enter.
2. Select Cluster (Worker) and confirm.
3. Check that the appliance was detected correctly and confirm with Enter.
4. Accept the external hostname or set the correct value. Select OK.
5. Choose Yes if you want to get the RAID arrays configured automatically. We recommend to choose Yes. Only choose No if the RAID was already configured before.
6. Read the GPFS license agreement and accept it with "1".

Execute the following command as root user only on the first node in the cluster, after the previous step was completed for every node in the cluster:

   # saphana-setup-saphana.sh

1. Read the License Agreement. Use the arrow keys to select Accept and press Enter.
2. Select Cluster (Master) and confirm.
3. Check that the appliance was detected correctly and confirm with Enter.
4. Enter the number of nodes in the cluster. Select OK.
5. Enter the number of standby nodes in the cluster. Select OK.
6. Make sure that the gpfsnode entries are assigned to the correct IPs. Select OK. Repeat for the hananode entries.
7. Read the GPFS license agreement and accept it with "1".
8. Confirm the shared filesystem mountpoint for HANA, or enter a customized value (see the Note in section 6.6.2.1). Select OK.
9. Enter a SID. Select OK.
10. Enter an Instance Number. Select OK.
11. Confirm the User ID of the HANA user, or enter a customized value. Select OK.
12. Confirm the Group ID of the HANA user, or enter a customized value. Select OK.
13. Enter the SAP HANA password. Select OK.
14. Confirm the password. Select OK.

Note
Follow the instructions in Section 7: After Installation on page 66. Please also review SAP Note 1906381 – Network setup for external communication for an overview of how HANA can connect to the client network.

6.6.3 Single Node with HA Installation with Side-car Quorum Solution

Adding a second node for high availability is described in section 10.1: Single Node with HA Installation with Side-car Quorum Solution on page 103. Please refer to that section when installing a simple single node HA solution.
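Directly after the setup tool finishes, a quick look at the cluster state shows whether GPFS and the shared file system came up. The commands are the standard GPFS 4.1 administration commands; the mount point is the one confirmed during setup:

   mmgetstate -a          # every node should report "active"
   mmlsmount sapmntdata   # the shared filesystem should be mounted on all nodes
   df -h /sapmnt          # and be visible under the chosen mountpoint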
7 After Installation

After the installation of the Lenovo Solution you have to take several actions to ensure that the installation is correct.

7.1 Actions to ensure the correctness of the installation

• At first execute a system check (see Section 15.2: Basic System Check on page 183) with the latest version of the check script. Follow the instructions given by the check script to prevent unwanted behaviour of the appliance.

Warning
Update the kernel and IBM GPFS to the suggested levels. Earlier versions of GPFS and the kernel have known bugs that may cause the appliance to stop working.

• On x3850 X6 and x3950 X6 servers you can create a symbolic link from /sapmnt/<SID> to /sapmnt/shared/<SID> to simulate the GPFS filesystem layout of eX5 based appliances, if you use scripts or other tools that use this path hard coded:

   ln -s /sapmnt/shared/<SID> /sapmnt/<SID>

• Install the SAP Solution Manager Diagnostics Agent (SMD). If the customer plans to integrate the new HANA server(s) into his existing SAP management infrastructure (SAP Solution Manager, System Landscape Directory), the SMD must be installed in preparation. The SAP Solution Manager Diagnostics Agent can be installed via the SAP HANA Lifecycle Manager (HLM). To install the SMD via the HANA Lifecycle Manager, open a browser and navigate to https://<HANAServerHostname>:1129/lmsl/HLM/<SID>/ui?sid=<SID>, choose Add Solution Manager Diagnostics Agent (SMD) and follow the instructions on screen. Skip the registration forms for the Solution Manager and the System Landscape Directory if you do not wish to register the HANA installation at this time. For other means to use the HLM, or if the HLM is not accessible, please refer to the SAP HANA Update and Configuration Guide15. The installation of the SAP Solution Manager Diagnostics Agent is documented in the chapter Adding a Solution Manager Diagnostics Agent on an SAP HANA System in the aforementioned guide.

• Check that the HANA log mode is configured correctly. If the log mode is wrong, the appliance will experience an out-of-space condition on the IBM GPFS (/sapmnt/). See SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery (No. 26: What general configuration options do I have for saving the log area?).

• Make sure that the backup paths are configured correctly. They are only allowed to point to the GPFS filesystem if it is used as a staging area for a third party backup solution. Permanent backups on the GPFS are unsupported.

• Check if the SAP Host Agent is running on every server. If not, you can either reboot every server in the cluster or start it by executing on every server:

   service sapinit start

Attention
Do not change the SSH configuration for the root user (e.g. by not allowing SSH logins). SSH is required for IBM GPFS and is configured accordingly.

15 Obtainable from http://help.sap.com/hana_appliance

7.2 HANA Network Setup

There are several options for how to set up HANA regarding the connection to the client network. This depends highly on the setup of the customer network. A good overview of the possibilities is given in SAP Note 1906381 – Network setup for external communication.
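For the SAP Host Agent check from section 7.1, the following sketch can be used on each server; the process and script names are the ones commonly used by SAP installations and should be verified on site:

   service sapinit status                         # init script used above to start the agent
   ps -ef | egrep '[s]aphostexec|[s]apstartsrv'   # agent processes that should be running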
A third optional location is possible for a tie breaking (quorum) feature of GPFS. Site C will refer to this quorum or tiebreaker site.

8.1.2 Architectural overview

The Lenovo DR solution for SAP HANA can be thought of as two standard Lenovo HA clusters in two different sites combined into one large cluster. Each site can be planned as a standard Lenovo HA cluster with the same hardware requirements as the standard solution. Currently, the only architectural requirement is that both sites have the same number of server nodes and each site has the same number of network switches as the existing Lenovo HA cluster offering.

The idea of the Lenovo DR solution for SAP HANA is to have one stretched IBM GPFS cluster spanning both sites and providing one file system for SAP HANA. There are two separate SAP HANA clusters on the two sites that can access data in this single shared file system. Synchronous data replication built into the file system ensures that at any given point in time there is the exact same data in both data centers.

The Lenovo DR solution for SAP HANA works with a total of three data copies. The first copy is kept local to the writing node, the second copy is stored on any other node except the writing node, and the third copy is always stored on a node on the remote site. Depending on the file size and actual disk space usage of a certain node, GPFS tends to either cluster blocks on a node or stripe them across multiple nodes; the same applies to the distribution over disks within a node. Figure 27: DR Data Distribution in a Four Node Cluster on page 69 shows the high-level architecture.

[Figure 27: DR Data Distribution in a Four Node Cluster. node1 to node4 at site A and node5 to node8 at site B, each with OS partitions (sda1/sda2/sdb2), HDD RAID devices and Fusion-io drives; the first replica is written locally, the second replica synchronously to another node of the same site (failure groups 1.0.x/1.1.x resp. 2.0.x/2.1.x), and the third replica plus metadata synchronously to the remote site.]

Warning
As of December 2012, SAP has published an end-to-end value of 320µs latency between the synchronous sites of a DR cluster. It is known by both SAP and Lenovo that this number by itself is not enough to describe whether the SAP HANA database can recover from a disaster or not. Latency is a term that can be split into many different categories, such as network latency or application latency, each of which has its own values necessary for a proper DR setup. It is also dependent on whether you use On Line Analytical Processing (OLAP) or On Line Transaction Processing (OLTP) workloads. Currently SAP is considering this value on a case per case basis, and it is important that you discuss these values with your customer and the SAP consultant on site.
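Whether a given inter-site link is even in the right range can be estimated with a simple round-trip measurement before the formal network qualification. The following is only a rough sketch, not a replacement for the end-to-end latency assessment discussed above; it assumes the remote GPFS hostnames are already resolvable:

    # ping -c 100 -i 0.2 gpfsnode05 | tail -2

The last lines of the output show the packet loss and the min/avg/max round-trip times. Keep in mind that the ICMP round-trip time covers only the network part of the latency discussed above.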
For the configuration of the inter-site portchannel shown in Figure 28: Logical DR Network Setup, see Section 5.8: Inter-Site Portchannel Configuration on page 36.

The details of the network setup are not strictly defined. It is up to the project team to develop a solution that is suitable to the customer's existing network infrastructure. The basic requirement is to have at least two sites; a third network site is needed if a so called tiebreaker node will be part of the Disaster Tolerance architecture.

Each site will use a standard HA setup with its own dedicated GPFS and SAP HANA network, which are part of the standard SAP HANA HA offering of Lenovo. This can be provided by using the standard IBM RackSwitch G8264 10 Gbps Ethernet switches. A dedicated Ethernet network needs to be provided for the GPFS network. The standard network requirements of an HA solution regarding the customer's uplink connectivity also apply to DR.

For the connectivity between the two main sites, at least one dedicated optical fibre connection end-to-end between both sides is recommended; using redundant optical fibres end-to-end may improve performance and reliability. A GPFS routed or non-dedicated connection may be used, but no guarantees about performance or operation can be made. This must be discussed well in advance together with the customer networking team.

For the tiebreaker node at site C there are no special network requirements, as there is only one server; neither bandwidth nor latency guarantees are needed. Each node must be able to reach the tiebreaker node and vice versa; the tiebreaker node must be reachable from within the internal GPFS IPv4 network. It is acceptable to use a routed connection through the customer's internal network as long as it is reliable. The project team is responsible to work out a solution with respect to the customer's infrastructure (including the optional site C) and requirements.

[Figure 29: DR Networking View (with no client uplinks shown). Four nodes on each site, each node connecting with 4 ports (2x GPFS, 2x HANA internal) to the site's pair of IBM RackSwitch G8264 switches (#1 to #4); the switches are coupled with 40 Gbit ISLs within a site and a 10 Gbit GPFS ISL between the sites. Only the HANA internal network and the GPFS network are shown, no uplinks connecting the HANA cluster to the client network.]
8.1.3 Three site/Tiebreaker node architecture

If the customer decides to use a tiebreaker node in a third site, an additional server with an appropriate GPFS license is required. This node is optional, but recommended for increased reliability and simplicity in the case of a disaster. The rationale for this node is the split-brain scenario where the connection between the two main sites is lost: the tiebreaker node helps in deciding which site is the active site and prevents the primary site from going down for data integrity reasons. Additionally, this server eases some operational procedures by reducing both the time needed for recovery and the likelihood of operating errors. The solution has been tested in setups with and without this additional node. This document will describe the use of the tiebreaker node and explain the deviations where it is not necessary.

The tiebreaker node must have a small partition (50 MB is sufficient) to hold a replica of the GPFS file system descriptors. It will not contain any data or metadata information, so performance is not critical for this partition. The partition can reside on a logical volume (LVM) if desired; GPFS must be able to recognize the partition, so when using LVM the name /dev/dm-X must be used instead of the logical volume name.

8.2 Mixing eX5/X6 Server in a DR Cluster

Please read chapter 9.2: Mixed eX5/X6 DR Clusters on page 97. Information given there takes precedence over the instructions below.

8.3 Hardware Setup

This section talks about how to physically install the System x machines and how to prepare the uEFI settings for HANA. It also provides information about how the network has to be set up.

8.3.1 Site A and B

The hardware setup of the nodes at each site has to be performed as described in section 6: Guided Install of the Lenovo Solution on page 41. The following list summarises these steps:

• Ensure certified hardware is available and connected to power
• Verify firmware levels. They must be identical on all nodes
• Modify / check UEFI settings. They must be identical on all nodes
• Configure storage (RAID setup)

8.3.2 Tiebreaker Site C (optional)

It is recommended to set up the tiebreaker node according to the description in section 10.2: Prepare quorum node on page 105. Although the use of any server is possible, we recommend to use the Side-Car Quorum Node x3550 M3/M4 defined in section 10.2: Prepare quorum node on page 105. This definition includes the necessary licenses and services required for the tiebreaker node.

8.3.3 Acquire TCP/IP addresses and host names

Refer to section 5.3: Network Configuration on page 22, which contains a template that can be used to gather all the required networking parameters. Ideally, this is done before the installation starts at the customer location.

For the tiebreaker node the following parameters must be available:

    Parameter                     Value
    Hostname
    IP address for Hostname
    IP address for GPFS Network

    Table 35: Hostname Settings for DR

In case of a new installation these additional parameters are required, see Table 36 on page 72:

    Parameter                     Value
    Netmask
    Default gateway
    Additional routes
    DNS server
    NTP server

    Table 36: Extra Network Settings for DR

The tiebreaker node must be able to reach all cluster nodes on both sites with the IP addresses and hostnames used for GPFS (gpfsnodeXX), with which the GPFS cluster communicates internally. Conversely, the cluster nodes must reach the tiebreaker node with the same host name and IP address.
8.3.4 Network switch setup (GPFS and SAP HANA network)

The setup of the switches used for the GPFS and SAP HANA network is described in section 5.4: Network Switch Configuration For Clustered Installations on page 23. For the link between the switches on both sites refer to the next sections.

8.3.5 Link between site A and B

The GPFS network will be stretched over site A and B. This means that the GPFS network on both sites will be one subnet and each node can reach all other nodes on both sites. The GPFS networks on both sites should be connected with at least a dedicated 10 GBit connection. A routed network is not recommended as it may have severe impact on the synchronous replication of the data.

The SAP HANA network, in contrast, is separated: the SAP HANA networks on site A and B are isolated from each other. This is due to SAP HANA being operated in a cold standby mode; both sites will use the same hostnames and IP addresses for SAP HANA. This requires a strict isolation of these two networks, whereas the GPFS network must not be isolated. The isolation can be achieved, for example, via a VPN connection, tunneling, or through a dedicated physical network.

8.3.6 Network integration into customer infrastructure

The network connections into the customer network for SAP HANA access, management, backup and other purposes depend very much on the customer network and the customer's requirements. It is up to the project team to come to an agreed solution with the customer.

8.3.7 Setup network connection to tiebreaker node at site C (optional)

The tiebreaker node at site C needs to be integrated into the GPFS cluster as well. Every node in the cluster must be able to contact the tiebreaker node and vice versa. How this is achieved depends on the configuration of the tiebreaker node (one or more network interfaces), the subnet used for GPFS traffic (private or public) and other parameters. Possible setups include a multi-homed tiebreaker node or static host routes when private address ranges are used; VPN, tunneling, NAT or router capabilities are further options.

The following is an example for a setup with a GPFS subnet of 192.168.10.x and a tiebreaker node with one network adapter and a public IP address in a 10.x.x.x range:

1. On the tiebreaker node, add the GPFS address as an alias to the NIC attached to the public network, e.g.

    # ifconfig eth0:1 192.168.10.199 netmask 255.255.255.0

To make this permanent, add an entry like this to the respective ifcfg-ethX file in /etc/sysconfig/network:

    IPADDR_1='192.168.10.199/24'

2. Add host routes on every node in the GPFS cluster to this IP alias:

    # route add -host 192.168.10.199 gw <tiebreaker external ip>

3. Add host routes on the tiebreaker node for every node in the cluster:

    # route add -host 192.168.10.101 gw <external IP node1>
    # route add -host 192.168.10.102 gw <external IP node2>
    ...
    # route add -host 192.168.10.10X gw <external IP nodeN>

4. Verify that the newly created alias is reachable throughout the cluster and that all nodes can be pinged from the tiebreaker node via the internal GPFS network addresses.
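Step 4 can be scripted. The following is a minimal sketch, assuming the eight node example used throughout this chapter; run the first loop on any cluster node and the second command on the tiebreaker node. If password-less SSH is not yet set up at this point, you will be prompted for each node:

    # for node in gpfsnode0{1..8}; do ssh $node "ping -c 1 -W 2 gpfsnode99 > /dev/null && echo $node: OK || echo $node: FAILED"; done
    # for node in gpfsnode0{1..8}; do ping -c 1 -W 2 $node > /dev/null && echo $node: OK || echo $node: FAILED; done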
8.4 Software Setup

Note
The base installation changed with the advent of the new text based installer, which also allows the installation on Red Hat Enterprise Linux. This replaces the manual installation described here in earlier releases.

Note
Starting with appliance version 1.9.96-13 the mount point for the GPFS file system sapmntdata is user configurable during installation. Lenovo currently recommends to use /sapmnt, while SAP promotes /hana. The following commands and code snippets use /sapmnt; for any other path please replace /sapmnt with the chosen path. SAP HANA will also be installed into this path.

Install all standard DR servers as described in section 6: Guided Install of the Lenovo Solution on page 41. In phase 3 choose the role Cluster Node (Worker) for all servers. For the optional quorum node, please follow the instructions given in section 10.2: Prepare quorum node on page 105 and following, to install the base operating system and software.

Please note that in the interim check in section 6.5: Interim Check on page 60 each site is only expected to see the site-local nodes in the HANA network test.

8.4.1 GPFS configuration prerequisites

8.4.1.1 Create /etc/hosts entries for GPFS

To ensure communication over the correct network interfaces, define the host entries manually on each node (including the tiebreaker node if available) for the GPFS and SAP HANA networks. Each node in the cluster (except the tiebreaker node) has the following two names associated with it:

    192.168.10.1XX gpfsnodeXX
    192.168.20.1XX hananodeXX

The GPFS network spans both sites, which means in an example with four nodes per site you have gpfsnode01 up to gpfsnode08 (gpfsnode01-04 at site A, gpfsnode05-08 at site B). The SAP HANA network is restricted to only one site: in the example with four nodes on each site you have hananode01 to hananode04 at site A and again hananode01 to hananode04 at site B, which means each hananodeXX entry is used twice (once per site). This effectively couples any active SAP HANA node to a backup node on the second site.

Example for two sites with four nodes each:

    # Site A
    192.168.10.101 gpfsnode01
    192.168.20.101 hananode01
    # Second node on first site:
    192.168.10.102 gpfsnode02
    192.168.20.102 hananode02
    192.168.10.103 gpfsnode03
    192.168.20.103 hananode03
    192.168.10.104 gpfsnode04
    192.168.20.104 hananode04
    # Site B
    192.168.10.105 gpfsnode05
    192.168.20.101 hananode01
    # Second node on second site (physically the sixth node)
    192.168.10.106 gpfsnode06
    192.168.20.102 hananode02
    192.168.10.107 gpfsnode07
    192.168.20.103 hananode03
    192.168.10.108 gpfsnode08
    192.168.20.104 hananode04

The tiebreaker node only has a gpfsnode name, as it is used solely for GPFS communication:

    192.168.10.199 gpfsnode99

Ensure that the entry for the local machine is always the first entry in the list; this is required for the installer scripts. Do not copy this file from one node to the other, as it will break other installation scripts.

After editing the /etc/hosts entries it is a good idea to verify network connectivity. To do so, execute the following command to list all nodes of the DR cluster attached to the GPFS network:

    # nmap -sP 192.168.10.0/24

and execute this command at each site to confirm the SAP HANA network:

    # nmap -sP 192.168.20.0/24

Only the nodes of the respective site should be listed by the second command. Verify that you got the correct machines by comparing the displayed MAC addresses with the MAC addresses of the bond1 device on each respective node.
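Name resolution itself can be checked independently of node liveness. The following small sketch assumes the eight node example above; getent prints the /etc/hosts entry actually used by the resolver:

    # for node in gpfsnode0{1..8} gpfsnode99; do getent hosts $node; done
    # for node in hananode0{1..4}; do getent hosts $node; done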
8.4.1.2 SSH key exchange

As GPFS uses SSH, the root SSH keys of all nodes need to be exchanged to allow password-less SSH connectivity within the cluster. This is a general GPFS requirement. Run the following commands all from the first node in the GPFS cluster. Please note that the following commands will overwrite any additional SSH key authorizations you may have installed yourself.

Generate the known_hosts file on the first node:

    # for node in gpfsnode0{1..8}; do ssh-keygen -R $node; ssh-keyscan -t rsa $node >> /root/.ssh/known_hosts; done
    # for ip in 192.168.10.{101..108}; do ssh-keygen -R $ip; ssh-keyscan -t rsa $ip >> /root/.ssh/known_hosts; done

A small explanation for the gpfsnode0{1..8} value: this generates a list of names from gpfsnode01 to gpfsnode08. If the host names are non-successive, replace gpfsnode0{1..8} with a space separated list of the hostnames. In our example above, a tiebreaker node would get allocated gpfsnode99.

Generate a new SSH key for password-less ssh access, authorize it and distribute it to the other nodes:

    # ssh-keygen -q -b 4096 -N "" -C "Unique SSH key for root on DR Cluster" -f /root/.ssh/id_rsa
    # cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
    # for node in gpfsnode0{2..8}; do scp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys root@$node:/root/.ssh/; done

Distribute the known_hosts file to the other nodes:

    # for node in gpfsnode0{2..8}; do scp /root/.ssh/known_hosts root@${node}:/root/.ssh/known_hosts; done

The distribution of the known_hosts file omits the first node, as on this node the files are already prepared.

The optional tiebreaker node only has GPFS addresses. This has two consequences: the tiebreaker node only has gpfsnodeXX entries in its /etc/hosts file for all nodes, and all other nodes have no hananodeXX entry for this special node.

Note
In previous releases of this document the shipped SSH root key was used and distributed among the nodes of the DR-enabled cluster. This imposes a security risk and you should consider replacing this key with a new unique key.
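To confirm that the key exchange worked without interactive prompts, a quick check (a sketch, assuming the host names above) is to force non-interactive authentication:

    # for node in gpfsnode0{1..8}; do ssh -o BatchMode=yes $node hostname || echo "$node FAILED"; done

Each node should print its host name; a failure indicates that the key or known_hosts distribution did not work for that node.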
8.4.2 GPFS Server configuration

Create the necessary configuration files; they will also be used in a later step. On the first node (which will be the primary configuration server), create a file /var/mmfs/config/nodes.cluster and add one line per node containing its GPFS network hostname. If applicable, add the tiebreaker node as the last node. Next append ":quorum" (no spaces) to the end of the line for some hosts, according to the following rules:

a) Distribute all available nodes (except the tiebreaker) in four equal sized groups and append ":quorum" to the first node of each group.
b) If a quorum node is available, mark it as quorum.
c) Without a quorum node, mark the second node of the first group as a quorum node.

With an example of 8 nodes, you should have 5 nodes marked as quorum nodes. If you have 3 nodes in each failure group, please contact support. See the following example for an 8 node DR cluster with and without a dedicated tiebreaker node (gpfsnode99):

    Failure group    Topology Vector   nodes.cluster file      nodes.cluster file
                                       with Quorum Node        without Quorum Node
    1                1.0.x             gpfsnode01:quorum       gpfsnode01:quorum
                                       gpfsnode02              gpfsnode02:quorum
    2                1.1.x             gpfsnode03:quorum       gpfsnode03:quorum
                                       gpfsnode04              gpfsnode04
    3                2.0.x             gpfsnode05:quorum       gpfsnode05:quorum
                                       gpfsnode06              gpfsnode06
    4                2.1.x             gpfsnode07:quorum       gpfsnode07:quorum
                                       gpfsnode08              gpfsnode08
    5 (tie breaker)  3.0.1             gpfsnode99:quorum       (not applicable)

    Table 37: GPFS Settings for DR Cluster

One comment regarding the topology vectors: the value of x has to be replaced with the number of the node within the failure group. With three nodes in each failure group the number of the nodes runs from 1 to 3 in each failure group; the second node in the first failure group would then be 1.0.2 and the second node in the third failure group would be 2.0.2.

The nodes.cluster file for an eight node setup without separate quorum node (i.e. tiebreaker node) should look like this:

    gpfsnode01:quorum-manager
    gpfsnode02:quorum-manager
    gpfsnode03:quorum-manager
    gpfsnode04
    gpfsnode05:quorum-manager
    gpfsnode06
    gpfsnode07:quorum-manager
    gpfsnode08

Note
Adding the node designation 'manager' is optional, as quorum nodes are automatically eligible to be chosen as cluster manager.

Create the GPFS cluster with the first node of each site as primary (-p) resp. secondary (-s) configuration server:

    # mmcrcluster -n /var/mmfs/config/nodes.cluster -p gpfsnode01 -s gpfsnode05 -C HANADR1 -A -r /usr/bin/ssh -R /usr/bin/scp

Mark all nodes as licensed: mark all the quorum nodes (including the optional tiebreaker node) and the configuration servers with a server license, and all other nodes as FPO licensed. Please take care of your actual licensing. For the example cluster:

    # mmchlicense server --accept -N gpfsnode01,gpfsnode02,gpfsnode03,gpfsnode05,gpfsnode07,gpfsnode99
    # mmchlicense fpo --accept -N gpfsnode04,gpfsnode06,gpfsnode08

Start the GPFS daemon on all nodes:

    # mmstartup -a

Apply the following cluster configuration changes:

    # mmchconfig unmountOnDiskFail=meta -i
    # mmchconfig panicOnDiskFail=meta -i
    # /usr/bin/yes 999 | /usr/lpp/mmfs/bin/mmchconfig dataStructureDump=/tmp/GPFSdump,pagepool=4G,maxFilesToCache=4000,skipDioWriteLogWrites=1,nsdInlineWriteMax=1M,prefetchAggressivenessWrite=2,maxMBpS=2048,enableLinuxReplicatedAIO=yes,enableRepWriteStream=false,readReplicaPolicy=local,nsdThreadsPerDisk=24

After this last command you need to restart GPFS with:

    # mmshutdown -a
    # mmstartup -a
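At this point it is worth confirming the cluster layout before any disks are added. The two commands below are the same ones used in section 8.4.7 for the final verification, so their output can also serve as a baseline:

    # mmlscluster
    # mmgetstate -a

mmlscluster should list all nodes with their designations (quorum, manager) and the two configuration servers; mmgetstate -a should report all nodes as active.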
8.4.3 GPFS Disk configuration

On the first node, create a file /var/mmfs/config/disk.list.data.fs. For each node add entries as described in the following paragraphs. You can get a device list with lsscsi.

Disk definitions: for every HDD RAID device /dev/sdb and subsequent devices, add an NSD definition like the following template:

    %nsd: device=/dev/sdb
      nsd=data01node01
      servers=gpfsnode01
      usage=dataAndMetadata
      failureGroup=1.0.1
      pool=system

Please don't forget to increment the first number in the nsd line, e.g. data01node01 for the first and data02node01 for the second HDD block device. Replace device, nsd name and server with the correct values where necessary, and replace the failureGroup with the correct topology vector for the particular node.

Then, after adding all device stanzas, add these lines unaltered. Make sure that the pool definition appears only once in this file:

    %pool: pool=system
      blockSize=1M
      usage=dataAndMetadata
      layoutMap=cluster
      allowWriteAffinity=yes
      writeAffinityDepth=1
      blockGroupFactor=1

When using a tiebreaker node, add the following lines to the stanza file. Determine the device name of the partition allocated for the descriptor-only NSD and change the line starting with %nsd: device= accordingly:

    %nsd: device=/dev/sda3
      nsd=desc01node99
      servers=gpfsnode99
      usage=descOnly
      failureGroup=3.0.1

8.4.4 Filesystem Creation

Create the NSDs:

    # mmcrnsd -F /var/mmfs/config/disk.list.data.fs -v no

Create the filesystem:

    # mmcrfs sapmntdata -F /var/mmfs/config/disk.list.data.fs -A no -B 512k -N 3000000 -v no -m 3 -M 3 -r 3 -R 3 -j hcluster --write-affinity-depth 1 -s failureGroupRoundRobin --block-group-factor 1 -Q yes -T /sapmnt

Create the filesets:

    # mmcrfileset sapmntdata hanadata -t "Data Volume for HANA database"
    # mmcrfileset sapmntdata hanalog -t "Log Volume for HANA database"
    # mmcrfileset sapmntdata hanashared -t "Shared Directory for HANA database"

Mount the filesystem on all nodes:

    # mmmount sapmntdata -a

To verify that the file system is successfully mounted, execute:

    # mmlsmount sapmntdata -L

Link the filesets in the filesystem:

    # mmlinkfileset sapmntdata hanadata -J /sapmnt/data
    # chmod 755 /sapmnt/data
    # mmlinkfileset sapmntdata hanalog -J /sapmnt/log
    # chmod 755 /sapmnt/log
    # mmlinkfileset sapmntdata hanashared -J /sapmnt/shared
    # chmod 755 /sapmnt/shared
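Writing the per-node %nsd stanzas of section 8.4.3 by hand is error prone in larger clusters. The following sketch generates the data stanzas for the eight node example with one /dev/sdb device per node; it is an illustration only, and the device list, the number of devices per node and the naming must be adapted to the actual hardware:

    for n in $(seq 1 8); do
      nn=$(printf "%02d" $n)
      site=$(( (n-1)/4 + 1 ))   # nodes 1-4 -> site 1, nodes 5-8 -> site 2
      half=$(( ((n-1)%4)/2 ))   # first or second failure group of the site
      pos=$(( (n-1)%2 + 1 ))    # position of the node within its failure group
      echo "%nsd: device=/dev/sdb"
      echo "  nsd=data01node${nn}"
      echo "  servers=gpfsnode${nn}"
      echo "  usage=dataAndMetadata"
      echo "  failureGroup=${site}.${half}.${pos}"
      echo "  pool=system"
    done >> /var/mmfs/config/disk.list.data.fs

This reproduces the topology vectors from Table 37 (e.g. gpfsnode04 gets 1.1.2 and gpfsnode08 gets 2.1.2). Review the generated file before running mmcrnsd.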
Set a quota on the hanalog fileset. The formula for the log quota in a DR scenario is:

    <# of active nodes> * RAM * <# of GPFS replicas>

Example: in a 7+7 scenario with L nodes, using 6 worker nodes and 1 standby:

    6 * 1024G * 3 = 18432G

Set the quota:

    # mmsetquota -j hanalog -h 18432G -s 18432G /dev/sapmntdata

8.4.5 SAP HANA appliance installation

Warning
SAP HANA in this DR solution must be installed using the hostname of the HANA-internal network (usually on bond1, hostname hananodeXX). The host based routing used in the HA solution is not applicable for the DR solution.

8.4.5.1 Install HANA on backup site

We recommend to install SAP HANA on the backup site first and thereafter on the primary site. This is safer because your backup site installation cannot accidentally make changes to your production environment.

Please install SAP HANA on the backup site as described in the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana. Before continuing with the installation make sure that the GPFS file system sapmntdata is mounted at /sapmnt.

In order to prepare the backup site, it is necessary to do a standard HANA installation and then delete the installed content on the shared filesystem. The roles (worker or standby) are not important, except that the first node needs to be a worker. We recommend to install all other nodes as standby, as this installation type is faster.

8.4.5.2 Stop HANA and SAP Host agent on backup site

Log in as <SID>adm on one node and stop SAP HANA:

    $ HDB stop

Then log in as root and stop the SAP Host agent and other services:

    # /etc/init.d/sapinit stop

Afterwards disable the autostart of the sapinit service:

    # chkconfig sapinit off

Do the last two steps on all backup nodes.

8.4.5.3 Delete SAP HANA shared content

The purpose of this installation was to install the node local parts of a SAP HANA system. After the installation has finished on all backup site nodes, the data in /sapmnt must be deleted:

    # rm -r /sapmnt/data/<SID>
    # rm -r /sapmnt/log/<SID>
    # rm -r /sapmnt/shared/<SID>

8.4.5.4 Disable mmfsup script on backup site nodes

An installation with the Recovery Image will install an mmfsup script which will automatically start SAP HANA after the file system comes up. This must be deactivated, as it may otherwise start SAP HANA on both sites (using the same hostnames). This is very important for DR to work properly. The script resides in /var/mmfs/etc. Disable it on all cluster nodes:

    # chmod 644 /var/mmfs/etc/mmfsup

Note
In previous releases of this document the mmfsup script was deleted. This is not necessary, as disabling the script is sufficient and will keep the file for future use.

8.4.5.5 Install HANA on primary site

Now install SAP HANA again on the primary site as described in the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana. Install SAP HANA with the same parameters as on the backup site. Please make sure that you install the individual HANA nodes with the correct roles, for example five worker and one standby node in a six node per site solution.

After the installation has finished, deactivate the autostart of the SAP services:

    # chkconfig sapinit off

Please verify that the user <SID>adm and the group sapsys have the same UID resp. GID on all nodes. Use the command

    # id <SID>adm

and compare the numerical IDs of <SID>adm and the group sapsys. You can specify the IDs in the SAP HANA installation process either via a configuration file or a command line parameter; you find the details in the SAP documentation: SAP HANA Server Installation and Update Guide.

8.4.5.6 Disable mmfsup script on production site nodes

As on the backup site, an installation with the Recovery Image will install an mmfsup script which will automatically start HANA after the file system comes up. This must be deactivated as it may start SAP HANA on both sites (using the same hostnames). The script resides in /var/mmfs/etc. Disable it on all cluster nodes:

    # chmod 644 /var/mmfs/etc/mmfsup

Note
In previous releases of this document the mmfsup script was deleted. This is not necessary, as disabling the script is sufficient and will keep the file for future use.

8.4.6 Tiebreaker node setup

8.4.6.1 Quorum node setup using a new node

The setup of a new server can be done by following the instructions in section 10.2: Prepare quorum node on page 105, excluding the setup of the switches, which does not apply to a DR configuration. Then follow the instructions in sections 10.6: Quorum Node IBM GPFS setup on page 108 and 10.7: Quorum Node IBM GPFS installation on page 108.
• Setup network access to all other GPFS cluster nodes in the GPFS network • Exchange ssh keys so that the tiebreaker node root account can be accessed without a password from the other GPFS cluster nodes.1 Quorum node setup using a new node The setup of a new server can be done by following the instructions in section 10. make.96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo.4.1.4.4. fail-over to the standby site will not work. You see the failure groups with 1 # mmlsdisk sapmntdata X6 Implementation Guide 1.9.6 Tiebreaker node setup 8. Follow the instructions in sections 10.7: Quorum Node IBM GPFS installation on page 108. 8.. General information how to install and setup GPFS can be found online in the Information Center section Installing GPFS on Linux nodes.6.1.7 Verify Installation 8. 2015 81 .1 GPFS Cluster configuration • Verify that all nodes are up and running 1 # mmgetstate -a • Verify distribution of the configuration servers The primary and secondary GPFS configuration servers must each be on one site.Technical Documentation 8. please consult the system administrator and ask him to: • Provide a partition which will be used for to hold the GPFS file descriptor information • Install GPFS • Build the GPFS portability layer.. 1 # mmchdisk sapmntdata start -a If disks are suspended you can resume them all with the following command: X6 Implementation Guide 1.9. two local and one remote copy) 1 # mmlsfs sapmntdata Verify that the following values are all set to 3: 1 2 3 4 -m -M -r -R Default Maximum Default Maximum number number number number of of of of metadata replicas metadata replicas data replicas data replicas • Test replication factor 3 Write a new file to the shared filesystem and verify replication level applied to this file: 1 # mmlsattr <path to file> All values must be set to 3 and no flags (like illbalanced. hardware failure. metaupdatemiss. In general. the cluster manager must be on the passive/backup site..1 should be in the file system.) and restart them once the problem has been resolved. If you are using the tiebreaker node. a fifth failure group 3. • Verify cluster manager location Verify the location of the cluster manager depending on the use of the tiebreaker node 1 # mmlsmgr If the solution uses a tiebreaker node.4.1.2: GPFS Server configuration on page 76.) must be shown. This has no effect on already started disks. in a solution without a tiebreaker node. • Disk availability All GPFS disks must be online.. • Check failure groups You should have four failure groups 1. If not using the tiebreaker make sure that the active site has at least one more quorum node than the passive site.1. system reboot.0. Get the list of failure groups from the disk list 1 # mmlsdisk sapmntdata Make sure that the server nodes are distributed evenly among the failure groups. 1 2 # mmlsdisk sapmntdata -e All disks up and ready If there are disks down or suspended.x. try to keep an odd number of quorum nodes. Please check the GPFS documentation or ask IBM GPFS support if there are flags shown after restripe.x 2.96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. 2015 82 .1.Technical Documentation Information about the failure group setting can be found in section 8. .x 1. etc.x and 2. To change the cluster manager issue 1 # mmchmgr -c <node> • Verify replication factor 3 (= three copies.0. check the reason (eg. the cluster manager must be on the active site. The following command will try to start all disks in the file system. 
Note
Follow the instructions in Section 7: After Installation on page 66.

8.5 Extending a DR-Cluster

This section describes how to grow a DR cluster. Growing a DR enabled cluster requires that both sites grow by the same number of nodes. The following sections will only explain the differences from the basic DR installation in the sections before. Let's assume that the new nodes are the 9th and 10th nodes, with node 9 going to the active site and node 10 into the backup site.

In general the installation of each active/backup server couple does not need to be done at the same time, but it is highly recommended. The overcautious technician may also decide to install the backup node prior to the active node.

8.5.1 Hardware Setup

Please refer to section 8.3: Hardware Setup on page 71 and follow the instructions there.

8.5.2 GPFS Part 1

1. The first step is to add /etc/hosts entries on every machine. On all existing nodes, add host entries for the GPFS network, e.g.:

    192.168.10.109 gpfsnode09
    192.168.10.110 gpfsnode10

On the new nodes add entries for all other nodes; copying the entries from one of the existing nodes is the easiest way. Ping the new machine on the GPFS network from all machines to test if the network configuration is correct. Ping the new machine on the HANA network from all servers of the same site.

Distribute any new nodes evenly into the existing failure groups (topology), so that a failure group has at most one more node than the others. If applicable, put the backup server into the corresponding failure group on the backup site. In the example above, the 9th node will go into failure group 1 (1.0.x) with topology vector 1.0.3, and the 10th node will go into failure group 3 (2.0.x), getting the topology vector 2.0.3.

2. Next, add host keys for the new nodes to the existing machines. Run on any existing node:

    # for srcnode in gpfsnode0{1..8}; do echo node $srcnode; ssh $srcnode 'for target in gpfsnode{09,10}; do echo -n $target; ssh-keygen -R $target; ssh-keyscan -t rsa $target >> /root/.ssh/known_hosts; done'; done

The value gpfsnode0{1..8} generates a list from gpfsnode01 to gpfsnode08; if the host names differ or are not consecutive, replace this with a space separated list of host names. The same applies to gpfsnode{09,10}, which are the new nodes in this example.
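Note: whether the host keys really ended up in the known_hosts files can be checked without logging in; ssh-keygen -F prints the stored entry for a host. A small sketch to run on any existing node:

    # for srcnode in gpfsnode0{1..8}; do echo "== $srcnode"; ssh $srcnode 'for target in gpfsnode09 gpfsnode10; do ssh-keygen -F $target > /dev/null && echo "$target: key stored" || echo "$target: MISSING"; done'; done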
Then copy the root SSH key to the new nodes. Issue these commands on one of the existing cluster nodes:

    # scp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys root@gpfsnode09:/root/.ssh/
    # scp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys root@gpfsnode10:/root/.ssh/

On all new cluster nodes run this command:

    # for node in gpfsnode{01..10}; do echo -n $node; ssh-keygen -R $node; ssh-keyscan -t rsa $node >> /root/.ssh/known_hosts; done

Test the SSH key exchange by running this command on any node:

    # for srcnode in gpfsnode{01..10}; do echo from node $srcnode; ssh $srcnode 'for target in gpfsnode{01..10}; do echo To node $target; ssh $target hostname; done'; done

The command should run without interaction and errors.

3. Install GPFS (base package):

    # cd /var/tmp/install/gpfs-<GPFS-RELEASE>
    # rpm -ivh gpfs.base-<GPFS-RELEASE>-0.x86_64.rpm

4. Update to the latest GPFS Maintenance Release.

Warning
It is highly recommended to upgrade to GPFS 3.5.0-17 or higher.

Install the following three packages for the latest (X) maintenance release:

    # rpm -ivh gpfs.docs-<GPFS-RELEASE>-X.noarch.rpm
    # rpm -ivh gpfs.gpl-<GPFS-RELEASE>-X.noarch.rpm
    # rpm -ivh gpfs.msg.en_US-<GPFS-RELEASE>-X.noarch.rpm

5. Verify your GPFS installation:

    # rpm -qa | grep gpfs

The packages installed above should be listed here.

6. Build the GPFS portability layer. Follow the instructions in /usr/lpp/mmfs/src/README:

    # cd /usr/lpp/mmfs/src
    # make Autoconfig
    # make World
    # make InstallImages

7. To add the new nodes to the cluster, run on any running node:

    # mmaddnode -N gpfsnode09,gpfsnode10

8. Mark the servers as licensed:

    # mmchlicense fpo --accept -N gpfsnode09,gpfsnode10

Please use the correct licenses for the nodes; server and FPO are just examples. Please take care of your actual licensing.

9. Start the new nodes:

    # mmstartup -N gpfsnode09,gpfsnode10

10. Create the disk descriptor files. Before adding the disks to the shared file system, you must create the disk descriptor or stanza files; see section 8.4.3: GPFS Disk configuration on page 77 for a description of the stanza files. You only need to create entries for the drives on the new nodes and you can omit the pool configuration entries. You can create the files on any node in the cluster, but it is preferably done on the node where the files for the initial cluster creation are located. Let us assume the new file is /var/mmfs/config/disk.list.data.gpfsnode0910. Create the NSDs:

    # mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnode0910
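For the two example nodes the stanza file could look as follows. This is a sketch assuming one /dev/sdb RAID device per node and the failure groups chosen in step 1; adapt the devices and topology vectors to the actual setup:

    %nsd: device=/dev/sdb
      nsd=data01node09
      servers=gpfsnode09
      usage=dataAndMetadata
      failureGroup=1.0.3
      pool=system
    %nsd: device=/dev/sdb
      nsd=data01node10
      servers=gpfsnode10
      usage=dataAndMetadata
      failureGroup=2.0.3
      pool=system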
8.5.3 HANA Backup Node Installation

Skip this section for a node on the active site. In order to prepare the backup site, it is necessary to do a standard HANA installation and then delete the installed content on the shared filesystem. A tool to automate this procedure is currently in development by SAP.

For the HANA installation on the backup site we need a temporary filesystem which must satisfy some requirements; RAM based filesystems are not sufficient. So we use the freshly created NSDs for a temporary filesystem and destroy the temporary filesystem afterwards, before continuing with the installation.

1. Create a temporary filesystem:

    # /usr/lpp/mmfs/bin/mmcrfs sapmnttmp -F /var/mmfs/config/disk.list.data.gpfsnode0910 -A no -B 1M -N 3000000 -v no -m 1 -M 3 -r 1 -R 3 -j hcluster --write-affinity-depth 1 -s failureGroupRoundRobin --block-group-factor 1 -Q yes

Before continuing with the installation make sure that the GPFS file system sapmntdata is not mounted at /sapmnt on the new nodes.

2. Mount this filesystem on all new backup nodes:

    # mmmount sapmnttmp /sapmnt -N <new backup nodes>

3. Install SAP HANA on the backup site as described in the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana. Do a single node installation on each node. Make sure to use exactly the same SAP SID, user names, user IDs, group names, group IDs, SAP instance number and paths as in the original DR-HANA installation. You can use the command id to query user and group information.

4. Stop HANA and the SAP Host agent on the backup site. Log in as <SID>adm on one node and stop SAP HANA:

    $ HDB stop

Then log in as root and stop the SAP Host agent and other services:

    # /etc/init.d/sapinit stop

Afterwards disable the autostart of the sapinit service:

    # chkconfig sapinit off

Do the last two steps on all backup nodes.

5. Disable the mmfsup script on the backup site nodes. An installation with the Recovery Image will install an mmfsup script which will automatically start SAP HANA after the file system comes up. This must be deactivated as it may start SAP HANA on both sites (using the same hostnames). The script resides in /var/mmfs/etc. Remove it on all new cluster nodes:

    # rm /var/mmfs/etc/mmfsup

6. Delete the temporary filesystem, which also deletes the SAP HANA shared content. After installing all new backup nodes, unmount the temporary filesystem on all nodes:

    # mmumount sapmnttmp -a

and delete it:

    # mmdelfs sapmnttmp

This deletes all shared HANA content and leaves the node specific HANA parts installed.
8.5.4 GPFS Part 2

1. Add the disks to the sapmntdata filesystem:

    # mmadddisk sapmntdata -F /var/mmfs/config/disk.list.data.gpfsnode0910

2. Verify the NSD status; verify that all NSDs are up and running:

    # mmlsdisk sapmntdata

3. Mount GPFS on the active site. On the new active nodes, and only on these, mount the GPFS file system:

    # mmmount sapmntdata -N <new active nodes>

Please make sure that you have mounted the shared file system on the new active nodes:

    # mmlsmount sapmntdata -L

The GPFS setup is now complete.

8.5.5 Install HANA on active site

1. If not already installed, install the SAP host agent:

    # cd /var/tmp/install/saphana/DATA_UNITS/SAP_HOST_AGENT_LINUX_X64
    # rpm -ihv saphostagent.rpm

As recommended by the RPM installation, a password for sapadm may be set.

2. Install the SAP HANA worker and standby nodes as described in the guide "SAP HANA Administration Guide".

Warning
SAP HANA in this DR solution must be installed using the hostname of the HANA-internal network (usually on bond1, hostname hananodeXX). The host based routing used in the HA solution is not applicable for the DR solution.

3. Deactivate the automatic startup through sapinit at boot time. Running SAP's startup script during system boot must be deactivated, as it will be executed by a GPFS startup script after the cluster start. Execute:

    # chkconfig sapinit off

8.6 Mixing eX5/X6 Server in a DR Cluster

Please read chapter 9.2: Mixed eX5/X6 DR Clusters on page 97. Information given there takes precedence over the instructions in this chapter.

8.7 Using Non Productive Instances on Inactive DR Site

IBM supports the installation of storage expansions in a DR scenario to allow clients to run a non-productive SAP HANA instance on idling DR-site nodes.

8.7.1 Architecture

This section briefly explains how IBM enables the use of idling DR-site nodes to run non-productive SAP HANA instances.

8.7.1.1 Prerequisites

The use of a storage expansion is only supported in a DR scenario. No expansions can be used when running in an HA environment unless they are part of the certified server models.

During normal operation in a DR scenario, all nodes at one of the two sites are only receiving data from the active site and store it on their local disks. This is due to SAP HANA being operated in a cold standby mode. SAP tolerates running a non-productive SAP HANA instance on those nodes. A storage expansion is used to provide enough local storage for those non-productive instances. In the event of a disaster, all non-productive SAP HANA instances have to be shut down to allow production to continue to run.

All nodes on the DR-site must have a storage expansion connected; having only a subset of the DR-site nodes equipped with storage expansions is not a supported environment. Furthermore, all expansions must have identical disk drives installed.

If the customer considers both participating data centers to be equal (which means that after a fail-over of his production instances to the DR-site he will not manually fail production back to his site A data center), then you must have storage expansions connected also to all primary site nodes. These expansions will remain unused until, after a failover, when the backup site has become the active site, you actually need to move data away from the DR-site nodes which are then hosting the SAP HANA production instances.
8.7.1.2 Architectural overview

The following illustration shows how IBM's solution for SAP HANA DR with storage expansions looks:

[Figure 30: SAP HANA DR using storage expansion, architectural overview. node1 to node4 at site A and node5 to node8 at site B with OS partitions, HDD and Fusion-io devices carrying the production file system (first, second and third replica plus metadata); the DR-site nodes additionally have RAID controllers with expansion drives forming a second file system that spans only the expansion box drives (metadata and data, first and second replica).]

The expansion storage is visible as local storage only and is connected via the SAS interface. The storage is not shared by multiple nodes. The local disks of the nodes are used for production data. There will be exactly one new file system spanning all DR-site expansion box drives.

Attention
The external storage can only be used to host data of non-productive SAP HANA instances. The storage must not be used to expand the space of the production file system or to store backups.

8.7.1.3 Architectural comments

IBM only supports running GPFS with a replication factor of 2 for the non-productive instance. With this, outages of a single node can be handled and no data is lost. We do not support a replication factor of 3, because the scope of non-productive SAP HANA environments does not include disaster recovery.

IBM does not enable quotas on the new expansion box file system. Make sure to have either a valid backup procedure in place or to regularly delete old backups. This, however, has to be done on the same file system.

While we do not support a multi SID configuration, it is a valid scenario to run, for example, a QA environment on some DR-site nodes and development on other DR-site nodes.

8.7.2 Setup

This section assumes that the nodes have already been successfully installed with an operating system (as required for a backup DR site).

8.7.2.1 Hardware setup

Connect the EXP2524 SAS port labeled 'In' to one of the M5120 or M5225 ports. For details, see the EXP2524 Installation Guide.

1. Configure the drives as described in section 6: Guided Install of the Lenovo Solution on page 41.
2. Either reboot or rescan the SCSI bus and verify that Linux recognizes the new drives.

8.7.2.2 GPFS configuration

You reuse the existing GPFS cluster and create a second file system spanning only the expansion drives of the DR-site nodes. Even if your setup includes expansions on the primary site, execute the procedure only on the DR-site expansions. The primary site expansion drives will not be used in the beginning; they get activated only when you need to migrate the non-productive SAP HANA instances' data away from the DR-site nodes.

1. On each DR-site node, collect the device names of all expansion drives. When using the M5225 controller you can get the drive names with this command:

    # lsscsi |grep "M5225" |grep -o -E "/dev/sd[a-z]+"

or execute the following command in case the M5120 controller is used:

    # lsscsi |grep "M5120" |grep -o -E "/dev/sd[a-z]+"

You will end up with something like:

    /dev/sde
    /dev/sdf
    /dev/sdg
    /dev/sdh

for each DR-site node. Note: after sdz, Linux wraps around and continues with sdaa, sdab, ...

2. Create additional NSDs. For all new expansion drives, create NSDs according to the following rules:

(a) all NSDs will be dataAndMetadata
(b) all NSDs go into the system pool
(c) the naming scheme is extXXgpfsnodeYY, with XX being the two-digit drive number and YY being the node number
(d) one failure group for all drives within one expansion box

Example: three M-size nodes with 32-drive expansion (gpfsnode01-03 are primary site nodes, gpfsnode04-06 are secondary site/DR-site nodes):

    /dev/sde:gpfsnode04::dataAndMetadata:4:ext01gpfsnode04:system
    /dev/sdf:gpfsnode04::dataAndMetadata:4:ext02gpfsnode04:system
    /dev/sdg:gpfsnode04::dataAndMetadata:4:ext03gpfsnode04:system
    /dev/sdh:gpfsnode04::dataAndMetadata:4:ext04gpfsnode04:system
    /dev/sde:gpfsnode05::dataAndMetadata:5:ext01gpfsnode05:system
    /dev/sdf:gpfsnode05::dataAndMetadata:5:ext02gpfsnode05:system
    /dev/sdg:gpfsnode05::dataAndMetadata:5:ext03gpfsnode05:system
    /dev/sdh:gpfsnode05::dataAndMetadata:5:ext04gpfsnode05:system
    /dev/sde:gpfsnode06::dataAndMetadata:6:ext01gpfsnode06:system
    /dev/sdf:gpfsnode06::dataAndMetadata:6:ext02gpfsnode06:system
    /dev/sdg:gpfsnode06::dataAndMetadata:6:ext03gpfsnode06:system
    /dev/sdh:gpfsnode06::dataAndMetadata:6:ext04gpfsnode06:system

Store this as /tmp/nsdlistexp.txt. Then create the NSDs using those disks:

    # mmcrnsd -F /tmp/nsdlistexp.txt
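Note: on larger clusters the per-node lines of /tmp/nsdlistexp.txt can be generated instead of typed. The following is a sketch to be run on each DR-site node; NODE and FG are assumptions that must be set to the node's GPFS hostname and its expansion failure group:

    NODE=gpfsnode04   # GPFS hostname of this DR-site node
    FG=4              # failure group for this node's expansion box
    i=0
    for dev in $(lsscsi | grep "M5225" | grep -o -E "/dev/sd[a-z]+"); do
      i=$((i+1))
      printf '%s:%s::dataAndMetadata:%s:ext%02d%s:system\n' "$dev" "$NODE" "$FG" "$i" "$NODE"
    done >> /tmp/nsdlistexp.txt

Collect the fragments from all DR-site nodes into one file before running mmcrnsd.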
3. Create the file system:

    # mmcrfs /dev/sapmntext -F /tmp/nsdlistexp.txt -A no -B 512k -N 3000000 -v no -m 2 -M 2 -r 2 -R 2 -j hcluster --write-affinity-depth 1 -s failureGroupRoundRobin --block-group-factor=1 -T /sapmntext

Warning
Be sure to use nsdlistexp.txt and not your list with internal drives! Using the wrong drives can destroy your production data!

4. Mount the file system on the DR-site nodes only:

    # mmmount sapmntext -N [list of DR-site nodes]

5. Install the SAP HANA worker and standby nodes as described in the guide "SAP HANA Administration Guide". Take care to install HANA on /sapmntext and not on /sapmnt. Also take care that you don't use the UID (user id) and GID (group id) of the DR HANA instance, especially when installing non-productive HANA instances before installing the DR instance. When configuring a clustered configuration by hand, see the Lenovo SAP HANA Appliance Operations Guide 16 for details.

16 SAP Note 1650046 (SAP Service Marketplace ID required)

9 Mixed eX5/X6 Environments

9.1 Mixed eX5/X6 HA Clusters

Attention
This chapter only applies to hybrid clusters consisting of servers with Intel Westmere and Intel Ivy Bridge CPUs. Hybrid clusters with a mix of Intel Westmere and Intel Haswell CPUs must not be installed!

9.1.1 Definition & Overview

A mixed eX5/X6 cluster is a System x Solution for SAP HANA cluster consisting of eX5 based servers (Intel Westmere, MT 7143 and 7147) and X6 based servers (Intel Ivybridge, MT 3837 and 6241). Another term used is "hybrid cluster". For each eX5 server model there exists a corresponding X6 server model which is permitted as a replacement:

    eX5 T-Shirt Size                                 X6 Server Model
    SSD (x3690, 7147-H3X or 7147-HBX, Generation 1)  AC32S256C (2 CPUs, 256GB RAM)
    S (x3690, 7147-H3X or 7147-HBX, Generation 2)    AC32S256C (2 CPUs, 256GB RAM)
    M (x3950, 7143-H2X or 7143-HBX)                  AC34S512C (4 CPUs, 512GB RAM)
    L (x3950, 7143-H3X or 7143-HBX)                  AC48S1024C (8 CPUs, 1024GB RAM)

    Table 38: eX5 T-Shirt Size to X6 Model Mapping

9.1.2 Prerequisites & Limitations

9.1.2.1 Limit of X6 nodes in a cluster

The maximum number of X6 servers in an eX5 cluster is limited by the number of eX5 servers within that cluster: the number of X6 servers must always be less than the number of eX5 nodes. If you plan to use more X6 servers in a cluster, the only supported options are either to increase the number of eX5 servers so that they are still the majority, or to switch to a pure X6 cluster, which requires a reinstallation.

9.1.2.2 Prerequisites

Before deploying any X6 server to an eX5 cluster, the GPFS filesystem software on the eX5 servers must be updated to the same version installed on the X6 models. The minimum supported GPFS versions for the cluster are GPFS 3.5 PTF 19 (3.5.0-19) or GPFS 4.1 PTF 8 (4.1.0-8), which may require an update even on the X6 nodes. Alternatively PTF 17 (3.5.0-17) with eFix 8 can be used; do not use plain 3.5.0-17 without eFix 8! Contact IBM support to obtain this eFix.

It is required to use only eX5 servers installed with appliance version 1.6.60-7 or later, which introduced RAID5 in cluster configurations. The RAID5 setup is perceived as being more reliable and convenient than the previously used RAID0 configuration. When installing a new cluster please use appliance version 1.6.60-7 or later for the eX5 servers.

Due to the new storage layout for X6-only installations, an X6 configuration must be slightly modified before an X6 node can be added to an eX5 cluster. Such an X6 node is considered to be configured in legacy or compatibility mode. Besides the different storage layout, there are some minor configuration changes between the older Westmere appliance releases and the first X6 appliance versions. These will be explained below; future releases will level the differences.

Appliance versions 1.6.60-7 and later contain a helper script for calculating the necessary file system quotas. In a hybrid cluster please use the script on the eX5 cluster node installed with the latest appliance version. If this script is not available, please calculate the quotas manually following the instructions in the appendix of the eX5 Operations Guide.
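Before adding an X6 node, the installed GPFS level on every node can be compared quickly. A small sketch using only commands already used elsewhere in this guide:

    # for node in gpfsnode0{1..8}; do echo -n "$node: "; ssh $node "rpm -q gpfs.base"; done

All nodes must report the same gpfs.base version, and it must be at one of the minimum levels listed above.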
9.1.3 New Installation

In general, the installation and operation instructions for eX5 and X6 based servers remain valid. For the installation of the X6 servers, please use the Lenovo X6 Systems Solution for SAP HANA Implementation Guide for System x X6 Servers and read the instructions below. For the eX5 servers, please use the installation description in the Lenovo eX5 Systems Solution for SAP HANA Implementation Guide. Please read these instructions before installing the new server and take care to implement them correctly.

When using X6 servers in an eX5 cluster, storage is divided into two GPFS storage pools. In clusters based on x3950 models, this is achieved by assigning the internal RAID array to the GPFS storage pool system and assigning the second RAID array in the external SAS enclosure (AC34S512C) resp. in the upper storage book (AC48S1024C) to the storage pool hddpool. The new X6 servers must provide these two storage pools in order to be compatible. For S/SSD model based clusters no change is needed, as these models use only one GPFS storage pool like the new X6 models.

It is safe to install the whole cluster including the X6 servers from any eX5 node. Follow the Implementation Guide until (and including) the call of the script saphana-setup-saphana.sh with the Cluster (Worker) option. Do not execute the script with the Cluster (Master) option, and do not run the cluster configuration on an X6 machine, as this will result in a misconfigured cluster. Complete the installation as described in the eX5 Implementation Guide and run phase 3 (of the cluster configuration) from any eX5 node. This means the script is only called once.

9.1.3.1 Partitioning for M/L sized clusters

For X6 nodes in M/L (x3950 based) clusters, the first internal RAID array needs to be partitioned at the OS level. After finishing the base installation in phase 2 and the subsequent reboot, log in to the server and run:

    # parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart system2 ext2 "" 1675 3350

For SSD/S sized clusters this is not necessary.

9.1.3.2 Adapting the GPFS stanza file

After configuring the base system and the subsequent reboot in phase 2 of the installation, the GPFS stanza files need to be adapted to the older eX5 storage layout. Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on all X6 nodes and change the usage and pool parameters as shown in Table 39: Stanza file for X6 servers in eX5 clusters on page 93. Please set the nsd, servers and failureGroup parameters to their correct values.
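The result of the partitioning step can be verified before touching the stanza files. A quick check, a sketch using the same units as the parted call above:

    # parted /dev/sdb --script unit gib print

The output should show a GPT label and the two partitions system1 (0 to 1675 GiB) and system2 (1675 to 3350 GiB).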
9.1.3.2 Adapting the GPFS stanza file

After configuring the base system and the subsequent reboot in phase 2 of the installation, the GPFS stanza files need to be adapted to the older eX5 storage layout. When using X6 servers in an eX5 cluster, storage is divided into two GPFS storage pools. This is achieved by assigning the internal RAID array to the GPFS storage pool system and assigning the second RAID array, located in the external SAS enclosure (AC34S512C) resp. in the upper storage book (AC48S1024C), to the storage pool hddpool. The new X6 servers must provide these two storage pools in order to be compatible. For S/SSD model based clusters no change is needed, as these models use only one GPFS storage pool like the new X6 models.

Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on all X6 nodes and change the usage and pool parameters as shown in table 39: Stanza file for X6 servers in eX5 clusters on page 93. Please set the nsd, servers and failureGroup parameters to their correct values. Complete the installation as described in the eX5 Implementation Guide and run phase 3 (of the cluster configuration) from any eX5 node; this means the script is called only once.

Model AC32S256C (S/SSD)
  Generated file:
    %nsd: device=/dev/sdb nsd=data01node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
  Change to: (no change required)

Model AC34S512C (M)
  Generated file:
    %nsd: device=/dev/sdb nsd=data01node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
    %nsd: device=/dev/sdc nsd=data02node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
  Change to:
    %nsd: device=/dev/sdb1 nsd=MDdata01node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
    %nsd: device=/dev/sdb2 nsd=MDdata02node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
    %nsd: device=/dev/sdc  nsd=data01node04   servers=gpfsnode04 usage=dataOnly        failureGroup=1004 pool=hddpool

Model AC48S1024C (L)
  Generated file: the same two-NSD layout as the M model (devices /dev/sdb and /dev/sdc)
  Change to: the same three entries as the M model (/dev/sdb1, /dev/sdb2 into pool system, /dev/sdc as dataOnly into pool hddpool)

Table 39: Stanza file for X6 servers in eX5 clusters
(The nsd names, server name gpfsnode04 and failure group 1004 are examples for node 4; use the correct values for your nodes.)

9.1.3.3 Enable automatic restripe for the whole cluster

eX5 models up to appliance software version 1.6.60-7 installed a script which attempts to start all NSDs and restripes the GPFS file system if any NSD was not up. This script was installed as a GPFS callback which gets triggered upon every node start. Since appliance version 1.7.70-8 the script and the callback are no longer installed and are replaced by a GPFS-internal restripe mechanism. The GPFS-internal restripe is enabled by setting the cluster configuration value restripeOnDiskFailure=yes.

In a mixed cluster you must delete the callback and enable the new GPFS-internal restripe. Deactivate the callback and enable the automatic restripe with the following commands:

1 # mmdelcallback start-disks-on-startup
2 # mmchconfig restripeOnDiskFailure=yes

Both commands need to be run only once on any active cluster node.
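Whether the change has taken effect can be verified with the corresponding list commands; the callback list should come back empty and the configuration option should read yes (a quick check, output abridged):

1 # mmlscallback
2 # mmlsconfig restripeOnDiskFailure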
9.1.4 Existing Cluster Extension/Node Replacement

When expanding a mixed cluster with additional eX5 servers, please follow the instructions in the eX5 Implementation & Operations Guides. When adding new X6 nodes to an existing hybrid cluster or an eX5-only cluster, please install the X6 nodes according to the X6 Implementation Guide, then follow the normal instructions given in the eX5 Operations Guide in chapter 4.2: Adding a cluster node.

After phase 2 (the basic configuration), the first internal RAID array of X6 nodes in M/L (x3950 based) clusters needs to be partitioned at the OS level. Log in to the server and run

1 # parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart system2 ext2 "" 1675 3350

For SSD/S sized clusters this is not necessary. Afterwards adapt the generated stanza file on each node before adding these nodes to the cluster: edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on the X6 nodes and change the usage and pool parameters as shown in table 40: Stanza file for X6 servers in eX5 clusters on page 95 (the required changes are identical to those in table 39). Please set the nsd, servers and failureGroup parameters to their correct values.

Afterwards either run the quota calculation script from any eX5 node or from any X6 node installed with appliance version 1.7.70-9 or later, or do the manual calculation described in the appendix section of the eX5 Operations Guide. Do not run the quota calculator on any X6 node installed with appliance version 1.7.70-8.

9.1.5 Deviating Operation Instructions

In general the eX5 Operations Guide is applicable for the whole cluster, including the new X6 servers.

9.1.5.1 Quota Calculation

eX5 based servers use two so-called filesets for a logical separation of the HANA data volumes and log files. X6 servers use three filesets, separating the HANA data volumes, the log files and the shared parts (like binaries, traces, configuration files and backups). Each fileset is limited with a quota. When using X6 servers in an eX5 cluster, the two-fileset setup is used on all nodes, so for the quotas the eX5 version of the Operations Guide is applicable. The quota calculation is explained in the appendix of that guide.

On any eX5 node, and on X6 nodes with appliance version 1.7.73-9 or later, you can use the quota calculation script saphana-quota-calculator.sh. The usage of this script is also documented in the quota chapter in the appendix. No special handling is required besides running the script only on eX5 nodes or on X6 nodes installed with appliance version 1.7.70-9 or later.

9.1.5.2 HANA installation

When installing additional SAP HANA instances or reinstalling SAP HANA, SAP HANA must be installed into /sapmnt as described in the eX5 documentation.

9.1.5.3 Storage Device Failure

For any failed storage device in an eX5 based node, the Implementation & Operations Guides for eX5 are fully applicable. For X6 based nodes please use the Operations Guide for X6. The only difference in handling is that the stanza files given in 9.1.4: Existing Cluster Extension/Node Replacement on page 94 must be used. Please also ensure that CacheCade acceleration is enabled for newly created RAID devices on X6.
9.2 Mixed eX5/X6 DR Clusters

Attention: This chapter only applies to hybrid clusters consisting of servers with Intel Westmere and Intel Ivy Bridge CPUs. Hybrid clusters with a mix of Intel Westmere, Intel Ivy Bridge, and Intel Haswell CPUs must not be installed!

9.2.1 Definition & Overview

A mixed eX5/X6 DR cluster is a Lenovo Solution DR-enabled cluster consisting of eX5 based servers (Intel Westmere, MT 7143 and 7147) and X6 based servers (Intel Ivy Bridge, MT 3837 and 6241). Another term used is "hybrid DR cluster".

Due to the new storage layout for X6-only installations, an X6 configuration must be slightly modified before an X6 node can be added to an eX5 cluster. Such an X6 node is considered to be configured in legacy or compatibility mode. Besides the different storage layout, there are some minor configuration changes between the older Westmere appliance releases and the first X6 appliance versions. These are explained below. Future releases will level the differences.

9.2.2 Prerequisites & Limitations

9.2.2.1 Limit of X6 nodes in a cluster

The maximum number of X6 servers in an eX5 DR cluster is limited by the number of eX5 servers within that cluster: the number of X6 servers must always be less than the number of eX5 nodes. If you plan to use more X6 servers in a cluster, the only supported options are either to increase the number of eX5 servers so that they remain the majority, or to switch to a pure X6 cluster, which requires a reinstallation.

For DR clusters we require that both sites (primary & secondary) consist either only of eX5 servers, only of X6 servers, or of a mix of eX5 and X6 servers where the eX5 servers have the majority on each site.

For example, these combinations are allowed:

• Primary site: 6 eX5, secondary site: 6 eX5 servers. The eX5 servers are the majority on both sites.
• Primary site: 6 eX5, secondary site: 6 X6 servers. This is allowed as no site is mixed.
• Primary site: 6 eX5, secondary site: 4 eX5 & 2 X6 servers. This is allowed as the first site is not mixed and the eX5 have the majority on the secondary site.
• Primary site: 4 eX5 & 3 X6, secondary site: 6 eX5 & 1 X6 servers. While both sites are mixed and the sites differ in size, in each site the eX5 are the majority.

These combinations are not allowed:

• Primary site: 3 eX5 & 3 X6, secondary site: 6 eX5 servers. This is not allowed as on the first site the eX5 servers are not the majority.
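Summarized, a site is valid if it is unmixed, or if its eX5 count strictly exceeds its X6 count. A minimal illustrative shell check of this rule (the helper function is hypothetical and not part of the appliance tooling):

1 # site_ok() { [ "$2" -eq 0 ] || [ "$1" -eq 0 ] || [ "$1" -gt "$2" ]; }  # $1 = eX5 count, $2 = X6 count
2 # site_ok 4 3 && echo "4 eX5 / 3 X6: valid"
3 # site_ok 3 3 || echo "3 eX5 / 3 X6: invalid"

Apply the check to each site separately; a DR cluster is supported only if both sites pass.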
For each eX5 server model there exists a corresponding X6 server model which is permitted as a replacement:

eX5 T-Shirt Size                      X6 Server Model
SSD (x3690, 7147-HBX, Generation 1)   AC32S256C (2 CPUs, 256GB RAM)
S (x3690, 7147-H3X, Generation 2)     AC32S256C (2 CPUs, 256GB RAM)
M (x3950, 7143-H2X or 7143-HBX)       AC34S512C (4 CPUs, 512GB RAM)
L (x3950, 7143-H3X or 7143-HBX)       AC48S1024C (8 CPUs, 1024GB RAM)

Table 41: eX5 T-Shirt Size to X6 Model Mapping

9.2.2.2 Prerequisites

Before deploying any X6 server to an eX5 cluster, the GPFS file system software on the eX5 servers must be updated to the same version installed on the X6 models. The minimum supported GPFS versions for hybrid DR clusters are GPFS 3.5 PTF 19 (3.5.0-19) or GPFS 4.1 PTF 8 (4.1.0-8), which may require an update even on the X6 nodes. Alternatively, PTF 17 (3.5.0-17) with eFix 8 can be used. Do not use plain 3.5.0-17 without eFix 8! Contact IBM support to obtain this eFix.

It is required to use only eX5 servers installed with appliance version 1.6.60-7 or later, which introduced RAID5 in cluster configurations. The RAID5 setup is perceived as being more reliable and convenient than the previously used RAID0 configuration. When installing a new cluster, please use appliance version 1.6.60-7 or later for the eX5 servers.

Appliance versions 1.6.60-7 and later contain a helper script for calculating the necessary file system quotas. Since appliance version 1.7.70-9 an updated quota calculation help script is installed which can detect a hybrid cluster environment, enabling it to use the correct formulas even when called on X6 nodes. If this script is not available, please calculate the quotas manually following the instructions in the appendix of the eX5 Operations Guide. In a hybrid cluster, please use the script on the eX5 cluster node installed with the latest appliance version.

9.2.3 New Installation

In general, the installation and operation instructions for eX5 and X6-based servers remain valid. For eX5 servers, please use the installation description in the Lenovo eX5 Systems Solution for SAP HANA Implementation Guide. For the installation of the X6 servers, please use the Lenovo X6 Systems Solution for SAP HANA Implementation Guide for System x X6 Servers and read the instructions below. Please read these instructions before installing the new server and take care to implement them correctly. Follow the Implementation Guide until (and including) the call of the script saphana-setup-saphana.sh with the Cluster (Worker) option. Do not execute the script with the Cluster (Master) option.

9.2.3.1 Partitioning for M/L sized clusters

For X6 nodes in M/L (x3950 based) clusters the first internal RAID array needs to be partitioned at the OS level. After finishing the base installation in phase 2, log in to the server and run

1 # parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart system2 ext2 "" 1675 3350

For SSD/S sized clusters this is not necessary.
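Because device names depend on how the RAID controller enumerates its arrays, it is worth confirming before partitioning that /dev/sdb really is the first internal RAID array and /dev/sdc the array in the external enclosure resp. upper storage book. A quick, generic check (the column selection is illustrative; array sizes should make the assignment obvious):

1 # lsblk -d -o NAME,SIZE,MODEL /dev/sdb /dev/sdc

If lsblk is not available, the sizes listed in /proc/partitions serve the same purpose.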
9.2.3.2 Adapting the GPFS stanza file

After configuring the base system and the subsequent reboot in phase 2 of the installation, the GPFS stanza files need to be adapted to the older eX5 storage layout. When using X6 servers in an eX5 cluster, storage is divided into two GPFS storage pools. This is achieved by assigning the internal RAID array to the GPFS storage pool system and assigning the second RAID array, located in the external SAS enclosure (AC34S512C) resp. in the upper storage book (AC48S1024C), to the storage pool hddpool. The new X6 servers must provide these two storage pools in order to be compatible. For S/SSD model based clusters no change is needed, as these models use only one GPFS storage pool like the new X6 models.

Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on all X6 nodes and change the usage and pool parameters as shown in table 42: Stanza file for X6 servers in eX5 clusters on page 100. Please set the nsd, servers and failureGroup parameters to their correct values. Complete the installation as described in the chapter "Disaster Recovery" in the Implementation Guide for eX5.

9.2.4 Existing Cluster Extension/Node Replacement

When expanding a mixed cluster with additional eX5 servers, please follow the instructions in the Disaster Recovery sections of the eX5 Implementation & Operations Guides. When adding new X6 nodes to an existing hybrid cluster or an eX5-only cluster, please install the X6 nodes according to the X6 Implementation Guide. After phase 2 (the basic configuration), adapt the generated stanza file on each node before adding these nodes to the cluster: edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on the X6 nodes and change the usage and pool parameters as shown in table 43: Stanza file for X6 servers in eX5 clusters on page 101. Please set the nsd, servers and failureGroup parameters to their correct values.

Afterwards either run the quota calculation script from any eX5 node or from any X6 node installed with appliance version 1.7.70-9 or later, or do the manual calculation described in the appendix section of the eX5 Operations Guide. Do not run the quota calculator on any X6 node installed with appliance version 1.7.70-8.

9.2.5 Deviating Operation Instructions

In general the eX5 Operations Guide is applicable for the whole cluster including the new X6 servers.

9.2.5.1 Quota Calculation

eX5 based servers use two so-called filesets for a logical separation of the HANA data volumes and log files. X6 servers use three filesets, separating the HANA data volumes, the log files and the shared parts (like binaries, traces, configuration files and backups). Each fileset is limited with a quota. When using X6 servers in an eX5 cluster, the two-fileset setup is used on all nodes, so for the quotas the eX5 version of the Operations Guide is applicable. The quota calculation is explained in the appendix of the guide. On any eX5 node, and on X6 nodes with appliance version 1.7.73-9 or later, you can use the quota calculation script saphana-quota-calculator.sh. The usage of this script is also documented in the quota chapter in the appendix.

Note: In the DR solution a quota is set only for the hanalog fileset.

9.2.5.2 HANA installation

When installing additional SAP HANA instances or reinstalling SAP HANA, SAP HANA must be installed into /sapmnt as described in the eX5 documentation.
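The fileset quotas currently in effect can be reviewed with the GPFS quota report; in the DR case only the hanalog row should show a limit (see the Note in 9.2.5.1 above). The file system name sapmntdata matches the one used elsewhere in this guide:

1 # mmrepquota -j sapmntdata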
The following shows, per X6 model, the generated stanza entries and the required changes for DR clusters (the failure group 1,0,4 is an example topology vector for node 4 on site 1; set the correct values for your site and node):

Model AC32S256C (S/SSD)
  Generated file:
    %nsd: device=/dev/sdb nsd=data01node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
  Change to:
    %nsd: device=/dev/sdb nsd=data01node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1,0,4 pool=system

Model AC34S512C (M)
  Generated file:
    %nsd: device=/dev/sdb nsd=data01node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
    %nsd: device=/dev/sdc nsd=data02node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
  Change to:
    %nsd: device=/dev/sdb1 nsd=MDdata01node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1,0,4 pool=system
    %nsd: device=/dev/sdb2 nsd=MDdata02node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1,0,4 pool=system
    %nsd: device=/dev/sdc  nsd=data01node04   servers=gpfsnode04 usage=dataOnly        failureGroup=1,0,4 pool=hddpool

Model AC48S1024C (L)
  Generated file:
    %nsd: device=/dev/sdb nsd=data01node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
    %nsd: device=/dev/sdc nsd=data02node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
    %nsd: device=/dev/sdd nsd=data03node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1004 pool=system
  Change to:
    %nsd: device=/dev/sdb1 nsd=MDdata01node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1,0,4 pool=system
    %nsd: device=/dev/sdb2 nsd=MDdata02node04 servers=gpfsnode04 usage=dataAndMetadata failureGroup=1,0,4 pool=system
    %nsd: device=/dev/sdc  nsd=data01node04   servers=gpfsnode04 usage=dataOnly        failureGroup=1,0,4 pool=hddpool
    %nsd: device=/dev/sdd  nsd=data02node04   servers=gpfsnode04 usage=dataOnly        failureGroup=1,0,4 pool=hddpool

Table 42: Stanza file for X6 servers in eX5 clusters
Table 43: Stanza file for X6 servers in eX5 clusters (identical in content to table 42)

9.2.5.3 Storage Device Failure

For any failed storage device in an eX5 based node, the Implementation & Operations Guides for eX5 are fully applicable. For X6 based nodes please use the Operations Guide for X6. The only difference in handling is that the stanza files given in 9.2.4: Existing Cluster Extension/Node Replacement on page 99 must be used. Please also ensure that CacheCade acceleration is enabled for newly created RAID devices on X6.

10 Special Single Node Installation Scenarios

This section covers installations that consist of just one single node in production and need to have HA or DR features using SAP System Replication or IBM GPFS storage replication.
10.1 Single Node with HA Installation with Side-car Quorum Solution

A single node with high availability (HA) describes the smallest possible configuration for a highly available Lenovo solution for a SAP HANA system. In principle, this can be described as a cluster where only a single node is highly available. Since there is only one SAP HANA worker node, there is no distribution of information across the nodes, as there is no secondary worker node attached.

The major difference between a single node HA configuration and larger scale-out clusters is the requirement to have a third node to build a quorum for the IBM GPFS file system. The described solution implements a simple 1U server as quorum node for IBM GPFS. This node does not contribute to the file system with any data disks, but it does contribute to the IBM GPFS cluster. The third node can, e.g., be a plain Lenovo System x3550 M4 system. Therefore, the smallest possible setup needs to contain three nodes: two Lenovo Workload Optimized Systems for SAP HANA and one quorum node.

Figure 31: Single Node with High Availability on page 103 shows a high level overview of the system landscape with two SAP HANA appliances and an IBM GPFS quorum node.

[Figure 31: Single Node with High Availability. Worker node, standby node and quorum node attached to two G8264 switches with an inter-switch link (ISL); GPFS links and SAP HANA links.]

The file system layout is shown in Figure 32: File System Layout - Single Node HA on page 104.

[Figure 32: File System Layout - Single Node HA. node1 and node2 hold the first and second replica of data and metadata (failure groups FG1 and FG2); node3, the quorum node (FG3), holds only a file system descriptor.]

10.1.1 Installation of SAP HANA appliance single node with HA

To begin the installation, you need to install both Lenovo Workload Optimized Systems using the steps at the beginning of chapter 6: Guided Install of the Lenovo Solution on page 41. Then the quorum node will be manually installed and configured to include its own IBM GPFS NSD in the file system cluster.

1. Refer to the steps as stated in section 6.2.2: Cluster Installation on page 64. Configure the network interfaces (internal and external) and the NTP server(s) as described there.

2. Start the text based installer as follows on each of the two nodes:

1 saphana-setup-saphana.sh -H

The switch -H prevents SAP HANA from being installed automatically; this needs to be done manually later. Select Cluster (Worker). This does a basic installation as a cluster node.

3. Start the installer again as above with the option -H, this time only on the future master node. Select this time Cluster (Master).

4. See again section 6.2.2: Cluster Installation on page 64 together with the steps described below.
10.1.2 Prepare quorum node

The quorum node used can be, e.g., a Lenovo System x3550 M4 with a single CPU and three local disks configured in a RAID5 configuration. It also contains an Emulex Virtual Fabric Adapter II with two 10 Gigabit Ethernet ports. We recommend the following server as quorum node for the best price/performance; bigger systems only incur a larger cost for the GPFS license and are not needed. See table 44 on page 105.

Part Number  System x3550 M4 GPFS quorum node                                  Qty
7914B2G      x3550 M4, Xeon 4C E5-2609 80W 2.4GHz/1066MHz/10MB, 1x4GB,          1
             O/Bay 2.5in & HS SAS/SATA, SR M1115, 550W p/s, Rack
49Y1397      8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM  6
90Y8877      IBM 300GB 2.5in SFF 10K 6Gbps HS SAS HDD                           3
81Y4481      ServeRAID M5110 SAS/SATA Controller for IBM System x               1
90Y6456      Emulex Dual Port 10GbE SFP+ Embedded VFA III for IBM System x      1
81Y4487      ServeRAID M5100 Series 512MB Flash/RAID 5 Upgrade for IBM          1
             System x Express
00D7087      IBM System x 550W High Efficiency Platinum AC Power Supply         1
00D8042      SLES X86 2 Socket Std SUSE Support 3Yr                             1
68Y9124      IBM GPFS for x86 Architecture, GPFS Server Per 10 VUs w/1 Yr       28
             SW S&S
46M0902      IBM UltraSlim Enhanced SATA Multi-Burner                           1
69Y5681      x3550 M4 ODD Cable                                                 1
90Y3901      IBM Integrated Management Module Advanced Upgrade                  1
91Y6450      3yr Essentials HW and SW Support                                   1
39Y7932      4.3m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable            2

Table 44: IBM System x3550 M4 GPFS quorum node

10.1.2.1 Install the Operating System

You may use SLES 11 (SUSE Linux Enterprise Server) to install the OS on this machine using the default settings. While installing Linux, please select the pattern "C/C++ Compiler and Tools" in addition to the default selection of software. If you do not do this at install time, open the YaST software panel and install the above pattern before installing and compiling GPFS.

Note: SLES 11 does not contain RAID drivers for the IBM ServeRAID M5110 RAID controller (see table 44). In order to install this driver at install time, you must prepare a USB drive with the appropriate ServeRAID device update driver (dud) file, which can be found on IBM FixCentral. Install it during the installation by pressing <F6> at the boot splash screen. You can download the IBM ServeRAID drivers from the IBM support sites, e.g. http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5082165. Please refer to the driver README instructions for further details.

Note: We recommend to always use the latest version of SLES for the quorum node.
If you install using the SLES for SAP Applications 11 DVD, you will be able to install with this dud file, but you will not be able to reboot the system, as the device driver that was used to install is not compatible with the newer kernel delivered on the SLES for SAP Applications 11 installation media. Therefore we do not recommend using the SLES for SAP Applications 11 installation media for this server.

10.1.2.2 Disk partitioning

The SLES 11 installation media will automatically partition your hard drive if you do not remove the boot option "autoyast=usb:///" completely. Although this is not dramatic, it would mean you would have to use a tool like gparted to resize the partitions afterwards. We therefore recommend to remove the boot option "autoyast=usb:///" completely and manually configure the partitions as described in table 45: Single Node with HA OS Partitioning on page 106.

Device      Size   Mount point
/dev/sda1   rest   /
/dev/sda2   10GB   swap
/dev/sda3   10GB   not mounted, not formatted, used for the GPFS NSD

Table 45: Single Node with HA OS Partitioning

10.1.2.3 Firewall

Disable the integrated firewall during the network configuration steps, or else you will not be able to connect to the server until the firewall has been configured correctly. The firewall may be turned on later and configured according to the SAP HANA Security Guidelines; this is not described in this document.

10.1.3 Quorum Node Network Setup

Follow the information in table 46: Single Node with HA OS Networking Setup on page 106 to set up the networking during the SLES OS installation. Deviations are possible for the management, client access and ERP replication networks, depending on the real customer requirements.

Network           Description
10GbE port 0      Connect this port to the first G8264 switch
10GbE port 1      Connect this port to the second G8264 switch
bond0             Bond port 0 and port 1 together; set the bonding options to:
                  mode=4 xmit_hash_policy=layer3+4
Host name         gpfsnode99
GPFS IP address   Place it at the end of the range (e.g. 192.168.10.253)
HANA IP address   Not needed, as this node will not run SAP HANA

Table 46: Single Node with HA OS Networking Setup

Figure 33 on page 107 shows the typical network setup for a single node with HA cluster.

[Figure 33: Network Switch Setup for Single Node with HA. Node1, node2 and the quorum node attach their bonded 10GbE GPFS and SAP HANA interfaces to two G8264 switches with inter-switch links; the IMM ports (1GbE) and the uplinks for system management and SAP Business Suite connect to switches of the customer's choice.]

10.1.3.1 Switch configuration

The network switches need to be configured in the standard scale-out configuration described in section 5.7: Network Configurations in a Clustered Environment on page 30. The 10GigE connections of the additional quorum node are configured as an extension of the existing vLAG configuration: the ports of the new network links need to be added to the correct VLANs, and the vLAG and LACP settings need to be made.

Description       ports   vLAG/LACP key   PVID
G8264 Switch #1   22      1002            101
G8264 Switch #2   22      1002            101

Table 47: Single Node with HA Network Switch Definitions

10.1.4 Adapt hosts file

The hosts file /etc/hosts on all three cluster nodes needs to contain the following entries. Change the IP addresses to the ones used in your scenario, and add entries that are missing, for instance external hostnames.

1 192.168.10.101 gpfsnode01 gpfsnode01
2 192.168.10.102 gpfsnode02 gpfsnode02
3 192.168.10.253 gpfsnode99 gpfsnode99
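Afterwards, name resolution should be identical on all three nodes; a quick check on each node (the expected output shows the three addresses from above):

1 getent hosts gpfsnode01 gpfsnode02 gpfsnode99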
10.1.5 SSH configuration

The ssh configuration also needs to be extended to the third node: each node needs to have the public ssh keys of every other node so that the communication between the GPFS nodes is guaranteed.

10.1.5.1 Generate the ssh key on the quorum node

Run the following command to generate the set of ssh keys on the quorum node:

1 ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''

The key needs to be copied to all cluster nodes. Run the following command on the quorum node for each host:

1 ssh-copy-id gpfsnode01
2 ssh-copy-id gpfsnode02

Run the following command on each of the first two nodes with the GPFS private network hostname of the new quorum node:

1 ssh-copy-id gpfsnode99

10.1.6 Quorum Node IBM GPFS setup

Update the file /var/mmfs/config/nodes.cluster on the first node (gpfsnode01) to the following content, as it may be needed later:

1 gpfsnode01:quorum
2 gpfsnode02:quorum
3 gpfsnode99:quorum

Besides the necessary number of quorum nodes it is also required to have a quorum on the file system descriptor. To maintain file system operations, GPFS requires a quorum of the majority of the replicas of the file system descriptor, and the number of copies of the file system descriptor depends on the number of disks in different failure groups. For a two node HA cluster it is therefore necessary to also have a copy of the descriptor on the quorum node: a disk is made available to GPFS on the additional quorum node which will only hold a copy of the file system descriptor. It does not hold any data or metadata.

10.1.7 Quorum Node IBM GPFS installation

Perform the following commands as user root. Copy the GPFS installer files from the master node:

1 mkdir -p /var/tmp/install/gpfs-4.1
2 scp gpfsnode01:/var/tmp/install/gpfs-4.1/GPFS_4.1_STD_LSX_QSG.tar.gz /var/tmp/install/gpfs-4.1
3 scp gpfsnode01:/var/tmp/install/gpfs-4.1/GPFS-4.1* /var/tmp/install/gpfs-4.1

This should give you the base installer archive GPFS_4.1_STD_LSX_QSG.tar.gz and the PTF archive GPFS-4.1.0.<PTF>-x86_64-Linux.tar.gz. Extract the IBM GPFS archives and start the installer:

1 cd /var/tmp/install/gpfs-4.1
2 tar xvf GPFS_4.1_STD_LSX_QSG.tar.gz
3 tar xvf GPFS-4.1.0.<PTF>-x86_64-Linux.tar.gz
4 ./gpfs_install-4.1.0-0_x86_64 --dir . --text-only

Accept the license by pressing "1". Then determine the release and fixpack level from the RPM file names and install the base RPMs followed by the update RPMs:

1 gpfs_release=$(ls gpfs.base-*-0.x86_64.rpm | cut -d- -f2)
2 gpfs_update_fixpack=$(ls gpfs.base-*.x86_64.update.rpm | cut -d- -f3 | cut -d. -f1)
3 rpm -ivh gpfs.base-${gpfs_release}-0.x86_64.rpm
4 rpm -ivh gpfs.gpl-${gpfs_release}-0.noarch.rpm
5 rpm -ivh gpfs.docs-${gpfs_release}-0.noarch.rpm
6 rpm -ivh gpfs.ext-${gpfs_release}-0.x86_64.rpm
7 rpm -ivh gpfs.msg.en_US-${gpfs_release}-0.noarch.rpm
8 rpm -ivh gpfs.gskit-*.x86_64.rpm
9 rpm -Uvh gpfs.base-${gpfs_release}-${gpfs_update_fixpack}.x86_64.update.rpm
10 rpm -Uvh gpfs.ext-${gpfs_release}-${gpfs_update_fixpack}.x86_64.update.rpm
11 rpm -Uvh gpfs.gpl-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm
12 rpm -Uvh gpfs.docs-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm
13 rpm -Uvh gpfs.msg.en_US-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm

Copy the license directory:

1 mkdir -p /usr/lpp/mmfs/4.1/
2 cp -pr license /usr/lpp/mmfs/4.1/

10.1.7.1 Build the IBM GPFS Portability Layer

Follow the instructions in /usr/lpp/mmfs/src/README. Alternatively, you may build the IBM GPFS libraries as follows:

1 cd /usr/lpp/mmfs/src
2 make Autoconfig
3 make World
4 make InstallImages
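If the build succeeds, the resulting kernel modules are placed under /lib/modules for the running kernel. Their presence can be confirmed before starting GPFS (module names as documented in the GPFS portability layer README):

1 ls /lib/modules/$(uname -r)/extra

The listing should include the GPFS modules, e.g. mmfs26.ko, mmfslinux.ko and tracedev.ko.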
10.1.7.2 Change SUSE Linux local settings

1. Create /etc/profile.d/saphana-profile.sh with the following content:

1 PATH=$PATH:/usr/lpp/mmfs/bin

2. Change the file permissions:

1 chmod 644 /etc/profile.d/saphana-profile.sh

3. Activate the new PATH variable:

1 source /etc/profile.d/saphana-profile.sh

4. Create a configuration directory for IBM GPFS:

1 mkdir /var/mmfs/config

5. Create a dump directory for IBM GPFS:

1 mkdir /tmp/GPFSdump

10.1.8 Add quorum node

Execute the next commands on the primary node:

1. Add the additional node to the cluster:

1 mmaddnode gpfsnode99

2. Mark the new node as correctly licensed:

1 mmchlicense server --accept -N gpfsnode99

3. Mark the backup node and the quorum node as quorum nodes for the cluster:

1 mmchnode --quorum -N gpfsnode02,gpfsnode99

4. Start IBM GPFS on the new node:

1 mmstartup

10.1.9 Create descriptor disk

Create a disk descriptor file /var/mmfs/config/disk.list.quorum.gpfsnode99 in the configuration directory of the quorum node. It should contain the following line, which defines the disk partition on the quorum node as an NSD with the explicit function to hold the file system descriptor:

1 /dev/sda3:gpfsnode99::descOnly:1099:quorum01node99

Create the NSD by running the mmcrnsd command on the quorum node:

1 mmcrnsd -F /var/mmfs/config/disk.list.quorum.gpfsnode99 -v no

10.1.10 Add disk to file system

After creating the NSD, the disk needs to be added to the file system by running the mmadddisk command:

1 mmadddisk sapmntdata -F /var/mmfs/config/disk.list.quorum.gpfsnode99 -v no

10.1.11 Verify Cluster Setup

Execute the command mmlscluster on one of the cluster nodes. The output should look similar to this:

1  GPFS cluster information
2  ========================
3    GPFS cluster name:         HANAcluster.gpfsnode01
4    GPFS cluster id:           12394192078945061775
5    GPFS UID domain:           HANAcluster.gpfsnode01
6    Remote shell command:      /usr/bin/ssh
7    Remote file copy command:  /usr/bin/scp
8
9  GPFS cluster configuration servers:
10 -----------------------------------
11   Primary server:    gpfsnode01
12   Secondary server:  gpfsnode02
13
14  Node  Daemon node name  IP address      Admin node name  Designation
15 ----------------------------------------------------------------------
16     1  gpfsnode01        192.168.10.101  gpfsnode01       quorum
17     2  gpfsnode02        192.168.10.102  gpfsnode02       quorum
18     3  gpfsnode99        192.168.10.253  gpfsnode99       quorum
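In addition to mmlscluster, the daemon state can be queried for all nodes at once; after a successful setup all three nodes should report "active":

1 mmgetstate -a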
10.1.11.1 List the IBM GPFS Disks

Check the disks in the cluster: there are two data devices on each of the NSD servers and only the descriptor disk on the quorum node. The listing of the command mmlsdisk sapmntdata -L shows that there is one disk per failure group which contains a file system descriptor:

1  disk            driver  sector  failure  holds     holds
2  name            type    size    group    metadata  data  status  availability  disk id  pool    remarks
3  --------------  ------  ------  -------  --------  ----  ------  ------------  -------  ------  -------
4  data01node01    nsd     512     1001     yes       yes   ready   up            1        system  desc
5  data02node01    nsd     512     1001     yes       yes   ready   up            2        system
6  data01node02    nsd     512     1002     yes       yes   ready   up            3        system  desc
7  data02node02    nsd     512     1002     yes       yes   ready   up            4        system
8  quorum01node99  nsd     512     1003     no        no    ready   up            5        system  desc
9  Number of quorum disks: 3
10 Read quorum value: 2
11 Write quorum value: 2

10.1.12 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana.

10.2 Single Node with stretched HA Installation

This solution is designed to provide improved high-availability capabilities for a single node SAP HANA installation. It can be applied to any SAP HANA configuration size. In such a 1+1 stretched HA scenario the secondary node usually is distant from the primary node. There is one active SAP HANA instance running on the primary node, and the database data gets replicated by IBM GPFS to the secondary node. The secondary node is running in hot-standby, ready to take over operation if the primary node experiences any failure. No non-production SAP HANA instance is allowed to run in this scenario.

Because of the importance of the quorum node it is recommended to place it at a third site. Examples are a different fire compartment zone or the other end of the campus; depending on distances it can also be a different campus in the same city. We understand, however, that this is not always feasible. This leads to the following two designs. In the first design, shown in figure 34: Single Node with stretched HA - Two Site Approach on page 112, the quorum node is placed at the primary site.
[Figure 34: Single Node with stretched HA - Two Site Approach. Worker node and quorum node on the primary site, standby node on site B; GPFS links, SAP HANA links and an inter-switch link (ISL) between the G8264 switches.]

The second approach places the quorum node at a third site. This ensures that a quorum may be reached if a node fails. The network architecture can be seen in figure 35: Single Node with stretched HA - Three Site Approach on page 112.

[Figure 35: Single Node with stretched HA - Three Site Approach. Worker node on the primary site, standby node on site B, quorum node on site C.]

10.2.1 Installation and configuration of SLES and IBM GPFS

This scenario must be installed like a conventional 1+1 HA scenario, as shown above in 10.1.1: Installation of SAP HANA appliance single node with HA. The major difference is the network setup. Usually, clients have different types of links spanning the two sites, and they use different network equipment technologies. The client is allowed to use his own network equipment (i.e. switches) on the secondary site, and the link can be either routed or switched, depending on the client's environment (in conventional 1+1 HA scenarios there is only one IBM-provided switch between the hops). Ensure that the separation of network interfaces is kept across both nodes (distinct switches or VLANs (Virtual Local Area Networks) for each IBM GPFS and HANA network port per node). Otherwise, the two setups are identical.

The file system layout is shown in Figure 36: File System Layout - Single Node stretched HA on page 113.

[Figure 36: File System Layout - Single Node stretched HA. node1 and node2 hold the first and second replica of data and metadata (failure groups FG1 and FG2); node3, the quorum node (FG3), holds only a file system descriptor.]

10.2.2 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana. The setup of this solution is a manual process after SLES has been installed.

10.3 Single Node with DR Installation

This solution is designed to provide disaster recovery capabilities for a single node SAP HANA installation. It can be applied to any SAP HANA machine size. There is one active SAP HANA instance running on the primary site node, and a standby node on the backup site is ready to take over operation in case of a disaster. The difference between a single node with stretched HA and a single node with DR installation is the fact that automatic failover is sacrificed for the possibility to run a non-production SAP HANA instance on the DR-site node.

Because of the importance of the quorum node it is recommended to place it at a third site. We understand, however, that this is not always feasible. This leads to the following two designs. In the first design, shown in figure 37: Single Node with Disaster Recovery - Two Site Approach on page 114, the quorum node is placed at the primary site.
[Figure 37: Single Node with Disaster Recovery - Two Site Approach. Worker node and quorum node on the primary site; DR node with storage expansion for a non-production DB instance on site B; GPFS links, SAP HANA links and an inter-switch link (ISL) between the G8264 switches.]

The second approach places the quorum node at a third site. This ensures that IBM GPFS on the primary site node stays up and running even if the link to the DR-site node gets interrupted. The network architecture can be seen in figure 38: Single Node with Disaster Recovery - Three Site Approach on page 114.

[Figure 38: Single Node with Disaster Recovery - Three Site Approach. Worker node on the primary site, DR node on site B, quorum node on site C.]

10.3.1 Installation and configuration of SLES and IBM GPFS

This scenario has to be installed in the exact same way as described in 10.1.1: Installation of SAP HANA appliance single node with HA. IBM GPFS replicates data to the backup site node; the difference is in the configuration of SAP HANA. The overall file system architecture is illustrated in figure 39: File System Layout - Single Node with DR with Storage Expansion on page 115.

[Figure 39: File System Layout - Single Node with DR with Storage Expansion. Data, metadata and file system descriptors are held in the shared file system across node1, node2 and the quorum node; a second file system on M5120 expansion storage attached to the DR node is reserved for a non-production instance.]

10.3.2 Optional: Expansion Storage Setup for Non-Production Instance

This solution supports the additional use of the DR-site node to host a non-production SAP HANA instance. Follow the instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance on page 126 to set up the additional disk drives.

10.4 Single Node with HA and DR Installation

This solution is designed to provide the maximum level of redundancy for a single node SAP HANA installation. It can be applied to any SAP HANA configuration size. High availability concepts ensure that the database stays up if the primary node has an issue; disaster recovery concepts ensure that the database stays up if the first two SAP HANA nodes (residing in the primary customer data center) become unavailable. Figure 40: Single Node with HADR using IBM GPFS Storage Replication on page 116 illustrates the overall architecture of the solution.

[Figure 40: Single Node with HADR using IBM GPFS Storage Replication. Worker and standby node on the primary site, DR node with storage expansion for a non-production DB instance on site B; GPFS links, SAP HANA links and inter-switch links between the G8264 switches.]

10.4.1 Installation and configuration of SLES and IBM GPFS

Install the latest supported IBM Systems Solution for SAP HANA on all three nodes by using the latest supported SLES for SAP Applications DVD and the latest non-OS component DVD. The procedure is similar to the one described in 10.1.1: Installation of SAP HANA appliance single node with HA. The final file system layout is shown in figure 41 on page 117.
[Figure 41: File System Layout - Single Node HADR. Data, metadata and the file system descriptor are replicated three ways across node1, node2 and node3 (failure groups FG1, FG2, FG3).]

1. Refer to the steps as stated in section 6.2.2: Cluster Installation on page 64. Make sure that all three SAP HANA nodes can ping each other on all interfaces, and assure that the IP addresses for the IBM GPFS and HANA networks are correct. The IP addresses can be in different subnets as long as proper routing between the subnets is in place.

2. Start the text based installer as follows on each of the two nodes:

1 saphana-setup-saphana.sh -H

The switch -H prevents SAP HANA from being installed automatically; this needs to be done manually later. Select Cluster (Worker). This does a basic installation as a cluster node.

3. Start the installer again as above with the option -H, this time only on the future master node. Select this time Cluster (Master). Enter the number of nodes 3 and the number of standby nodes 1 (this does not matter, as it would be used only for SAP HANA, which is not installed automatically anyway). Enter the details for SID, Instance ID and a HANA password. Accept the IBM GPFS license and wait for the installation process to continue successfully.

4. Change the replication level for the IBM GPFS fileset:

1 mmchfs sapmntdata -m 3 -r 3

5. Check the replication level set:

1 mmlsfs sapmntdata

The output should show:

1 Default number of metadata replicas  3  (-m)
2 Maximum number of metadata replicas  3  (-M)
3 Default number of data replicas      3  (-r)
4 Maximum number of data replicas      3  (-R)

6. Restripe the data on the IBM GPFS file system so that all blocks have the required three replicas:

1 mmrestripefs sapmntdata -R

7. Set the following IBM GPFS configuration parameters:

1 mmchconfig unmountOnDiskFail=meta
2 mmchconfig panicOnDiskFail=meta

8. Adjust the quotas on the file system. The log quota is set to 1 TB regardless of memory size:

1 mmsetquota -j hanalog -h 1024G -s 1024G /dev/sapmntdata

The data quota for this HADR scenario is set to 9 * RAM; in case of a 1 TB server this means a quota of 9 TB:

1 mmsetquota -j hanadata -h 9216G -s 9216G /dev/sapmntdata

Allocate the remaining space to HANA shared and execute mmsetquota accordingly:

1 mmsetquota -j hanashared -h <REMAINING>G -s <REMAINING>G /dev/sapmntdata

9. Install SAP HANA similarly as described in section 8.5: SAP HANA appliance installation on page 79.
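The <REMAINING> value in step 8 is simply what is left of the file system capacity after the hanadata and hanalog quotas are subtracted. A rough way to derive it, assuming the file system is mounted at /sapmnt (the mount point and the awk extraction are illustrative):

1 total_gb=$(df -BG /sapmnt | awk 'NR==2 { gsub("G","",$2); print $2 }')
2 echo $(( total_gb - 9216 - 1024 ))

For the 1 TB server example above, this prints the hanashared quota in GB.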
10.4.2 Optional: Expansion Storage Setup for Non-Production Instance

This solution supports the additional use of the DR-site node to host a non-production SAP HANA instance. Follow the instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance on page 126 to set up the additional disk drives. The overall file system architecture is illustrated in figure 42: File System Layout - Single Node HADR with Storage Expansion on page 119.

[Figure 42: File System Layout - Single Node HADR with Storage Expansion. As figure 41, with an additional M5120-based file system for the non-production instance on node3.]

10.5 Single Node DR Installation with SAP HANA System Replication

This solution provides redundancy at the application layer: SAP HANA System Replication transfers data to a DR node on a remote site. It can be applied to any SAP HANA configuration size. For details, see the official SAP HANA documentation on http://help.sap.com/hana.

There are two ways to design the network for such a DR solution based on System Replication. As the IBM GPFS interfaces on the DR-site node are not connected to the primary site, a set of redundant switches is optional. This leads to one architecture with switches and one architecture without switches between the SAP HANA nodes. Figure 43: Single Node DR with SAP System Replication on page 120 shows the solution with switches.

[Figure 43: Single Node DR with SAP System Replication. Worker node on the primary site and DR node with storage expansion for a non-production DB instance on site B, connected through G8264 switches with an inter-switch link (ISL).]

Because the two SAP HANA nodes do not use their IBM GPFS network interfaces, you can also opt for a solution without intermediate network switches. In this case you have to connect the two 10 Gbit interfaces used for SAP HANA communication on the two nodes directly, without an intermediate switch. This architecture is illustrated in figure 44: Single Node DR with SAP System Replication on page 120.

[Figure 44: Single Node DR with SAP System Replication without switches. Worker node and DR node connected directly via the SAP HANA links.]

10.5.1 Installation and configuration of SLES and IBM GPFS

Each site is considered to be a single node as far as SLES and IBM GPFS are concerned. Perform a single node installation on both nodes as described in 6.2.1: Single Node Installation on page 63, but start the installer with the -H option:

1 saphana-setup-saphana.sh -H

In the option list select Single Node. The switch -H prevents HANA from being installed automatically; this needs to be done manually later. Configure the network connection for SAP HANA and ensure the connectivity. Data replication will be taken care of on the SAP HANA application level; replication can happen synchronously or asynchronously. The final file system layout can be seen in figure 45: File System Layout of Single Node DR with SAP System Replication on page 121.

[Figure 45: File System Layout of Single Node DR with SAP System Replication. Two independent GPFS clusters (A and B), each with its own file system holding one replica of data and metadata plus a file system descriptor; SAP HANA System Replication runs between node1 and node2.]

10.5.2 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana.

10.5.3 Optional: Expansion Storage Setup for Non-Production Instance

This setup supports the additional use of the DR-site node to host a non-production SAP HANA instance. The layout of the two file systems (production and non-production) is illustrated in figure 46: File System Layout of Single Node DR with SAP System Replication with Storage Expansion on page 122.
[Figure 46: File System Layout of Single Node DR with SAP System Replication with Storage Expansion. Two independent GPFS clusters; the remote node carries a second file system on M5120 expansion storage for the non-production instance.]

On the remote site node (receiving the replication data from the primary SAP HANA instance) you will have two file systems configured. The primary file system spans local disks only and is to be configured in the exact same way as the primary site file system; it will host the replicated data coming in from the active production SAP HANA instance. The second file system consists only of storage expansion box drives attached to the remote site node; it will host the data of the non-production SAP HANA instance. Follow the instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance on page 126 to set up these additional disk drives.

10.6 Single Node with HA using IBM GPFS Storage Replication and DR using System Replication

This approach also provides maximum redundancy for single node SAP HANA installations. It can be applied to any SAP HANA configuration size. We use the term 1+1/1 to describe this style of single node installation: for HA (1+1) it uses the IBM GPFS storage replication feature, and in addition it uses the SAP HANA System Replication feature for DR.

To achieve this, the active and the standby node are in the same IBM GPFS cluster and have access to the same file system. Whenever the active node writes data to disk, IBM GPFS replicates it to the standby node. The DR node creates a separate IBM GPFS cluster consisting just of itself, with its own file system on local disks. There is no logical connection to the primary site IBM GPFS cluster; as a consequence, the IBM GPFS network adapter on the DR node is to be left unconnected. In case of a disaster in the primary site data center the DR node can be used to host SAP HANA. SAP HANA System Replication can run either in synchronous or in asynchronous replication mode.

This leads to two possible network architectures. The first one provides redundant switches on both sites. Figure 47: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication on page 123 shows this design.
To install the DR node, follow all steps of a standard SAP HANA single node installation apart from installing SAP HANA itself (use the -H option).

Figure 49: File System of Single Node with HA and DR with System Replication (file system A is shared by node1 and node2 in GPFS cluster A and holds first and second replicas of data and metadata plus the file system descriptors; file system B on the DR node node3 in GPFS cluster B holds a single replica and is kept in sync via SAP HANA System Replication)

10.6.2 Installation of SAP HANA

Install two separate instances of SAP HANA, one in each site. This includes installing all components of SAP HANA and making sure that each instance runs self-contained. For the primary site please follow the according steps for a clustered HA installation. On the DR node you have to follow all steps of a standard SAP HANA single node installation. You then have to follow the official SAP HANA documentation to enable SAP HANA System Replication between the instance on the primary site node and the instance on the DR node; a minimal command sketch follows below.
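The following is a minimal sketch of enabling SAP HANA System Replication between the two instances, to be run as the <sid>adm user. The site names, host name and instance number are examples, and the exact hdbnsutil options vary between SAP HANA SPS levels, so the official SAP HANA Administration Guide remains authoritative:

# On the primary site node: enable system replication
hdbnsutil -sr_enable --name=SiteA

# On the DR node, with the local instance stopped: register against the primary
hdbnsutil -sr_register --remoteHost=hananode01 --remoteInstance=00 --mode=sync --name=SiteB

# Then start the secondary instance; it will begin replicating from the primary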
Figure 50: File System of Single Node with HA and DR with System Replication and Storage Expansion (as figure 49, with an additional M5120-attached expansion file system on the DR node in GPFS cluster B)

10.7 Expansion Storage Setup for Non-productive SAP HANA Instance

This section describes how to set up the disks in an expansion storage that hosts a non-productive SAP HANA instance. Expansion storage is supported in environments where the nodes at a DR site would otherwise be idle. Depending on the memory size of the nodes you have a different number of drives in the expansions. Create as many (8+p) RAID 5 arrays as possible and declare the remaining drives as hot spares. For details on how to use the RAID configuration utility see 6.2.1: Storage Configuration – RAID Setup on page 48.

Each RAID 5 device will be given to IBM GPFS as an NSD. Collect the device names of all newly created virtual drives, then create NSDs on them according to the following rules:

1. all NSDs will be dataAndMetadata
2. all NSDs go into the system pool
3. the naming scheme is extXXnodeYY, with XX being the two-digit drive number and YY the node number
4. use one single failure group for all expansion box drives; make sure it is unique within your cluster

Store a disk descriptor file similar to the following as /tmp/nsdlistexp.txt:

%nsd:
  device=/dev/sdd
  nsd=ext01node02
  servers=gpfsnode02
  usage=dataAndMetadata
  failureGroup=2
  pool=system
%nsd:
  device=/dev/sde
  nsd=ext02node02
  servers=gpfsnode02
  usage=dataAndMetadata
  failureGroup=2
  pool=system
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

Create the NSDs:

# mmcrnsd -F /tmp/nsdlistexp.txt

Create the file system:

# mmcrfs /dev/sapmntext -F /tmp/nsdlistexp.txt -A no -B 1M -N 3000000 -v no -m 1 -M 2 -r 1 -R 2 -s failureGroupRoundRobin -T /sapmntext

Mount the file system on the backup site node:

# mmmount sapmntext

If your client has a storage expansion connected to both nodes, primary site and backup site, then you need to apply the above procedure twice, once for each node. Each expansion box file system is to be handled separately. Do not create a single file system that spans both expansion box disks! This scenario is used if both data centers – thus both nodes – are to be considered equal and you want to be able to run production SAP HANA out of both data centers. In this case non-production SAP HANA instances must also be able to run on both nodes, hence the need for a dedicated /sapmntext file system on both sides.

11 Virtualization

The Lenovo Solution can be installed inside of a VMware virtual machine starting with Support Package Stack (SPS) 05. Currently SAP supports the following virtualization solutions:

• VMware vSphere 5.1 and SAP HANA SPS05 (or later releases) for non-production use cases
• VMware vSphere 5.5 and SAP HANA SPS07 (or later releases) for production and non-production use cases

For non-production use multiple virtual machines may be deployed. For production use only single node installations are supported. See SAP Note 1788665 – SAP HANA Support for VMware Virtualized Environments. For the VMware vSphere configuration please see SAP Note 1122388 – Linux: VMware vSphere configuration Guidelines.

Attention
For Lenovo servers with Intel Haswell EX processors the minimum supported version of VMware vSphere is 5.5U2.

The sizing of a virtual machine has to be done according to the existing SAP HANA sizing guidelines for single node installations, and the CPU/RAM ratio has to be met. In general SAP HANA virtualized with VMware vSphere is sized the same as non-virtualized SAP HANA deployments. In other words, for sizing the virtual machine (VM) the CPU/memory ratio as used for bare-metal sizing is taken into account to ensure locality of memory access on the underlying hardware resources.

Name  vCPUs  Virtual memory (GB)  Ratio  Total HDD for OS (GB)  Total HDD for GPFS (GB)
VM1    10     64                   1      128                    416
VM2    20     128                  2      128                    736
VM3    30     192                  3      128                    1056
VM4    40     256                  4      128                    1376
VM5    50     320                  5      128                    1696
VM6    60     384                  6      128                    2016

Table 48: SAP HANA Virtual Machine Sizes by Lenovo

This document covers the installation of one VM on 2 or 4 socket System x3850 X6 Workload Optimized solutions. The installation on 8 socket System x3950 X6 systems is not supported.
For the installation of multiple VMs on System x3850 X6 machines please consult the SAP documentation.

11.1 Getting Started

11.1.1 Memory Overhead

CPU and memory overcommitment is not allowed in virtual HANA environments. For this reason memory has to be spared for the ESXi hypervisor to run and manage the virtual machines. A very conservative estimate for the amount of memory that needs to be left unassigned to the SAP HANA virtual machines for overhead is 3 to 4 percent. For example, on a system having 1 TB of RAM, approximately 30 to 40 GB would need to be left unassigned to the virtual machines. In a system with 1 TB of RAM a single VM6 machine with 384 GB RAM could be installed, leaving the rest of the system unused. Two VM6 machines would still leave enough unassigned memory for the hypervisor and virtual machine memory overhead.

11.1.2 Configure UEFI

Apply the UEFI configuration as described in section 6: Guided Install of the Lenovo Solution on page 41.

11.1.3 Start Embedded VMware ESXi Hypervisor

The VMware ESXi 5.5 hypervisor is to be installed on a USB pen drive. The drive is located at an internal USB port in the server; this prevents an unintended removal of the USB pen drive. Boot the server with the USB pen drive attached. Enter BIOS and select Boot Manager, then Boot from embedded hypervisor. VMware ESXi 5.5 does not boot from a USB drive when the BIOS is in legacy mode; it must be in UEFI mode.

11.1.4 Configure Management Network of ESXi Hypervisor

To be able to connect to the ESXi hypervisor you have to configure the management network. Per default ESXi connects to the first available network adapter via DHCP, which is not always desired.

1. At the direct console of the ESXi host, press F2 and provide credentials when prompted. (Figure 51: login to ESXi)
2. Scroll to Configure Management Network and press Enter. (Figure 52: configure management network)
3. In the first row you see Network Adapters. (Figures 53 and 54: display network adapters)
4. Scroll to IP Configuration and press Enter. (Figure 55: IP configuration)
5. Set IP address, subnet mask and default gateway. (Figure 56: Set IP, netmask, GW)
6. Press Enter, then scroll to DNS Configuration and press Enter.
7. Set primary and secondary DNS server and the hostname and press Enter. (Figure 57: Set DNS and Hostname)
8. Scroll to Custom DNS Suffix and press Enter.
9. Set the DNS suffix. (Figure 58: Set DNS suffix)
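As an alternative to the interactive steps above, the management network can also be configured from the ESXi shell. This is a minimal sketch with example values; the VMkernel interface name vmk0, the addresses and the host name are assumptions, not values prescribed by this guide:

esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.0.10 -N 255.255.255.0
esxcfg-route 192.168.0.1                                 # set the default gateway
esxcli network ip dns server add --server=192.168.0.53
esxcli system hostname set --fqdn=esxi01.example.com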
11.1.5 Enable SSH on VMware ESXi Hypervisor

By default, remote command execution is disabled on an ESXi host, and you cannot log in to the host using a remote shell. You can enable remote command execution from the direct console or from the vSphere Client. To enable SSH access in the direct console:

1. At the direct console of the ESXi host, press F2 and provide credentials when prompted.
2. Scroll to Troubleshooting Options and press Enter.
3. Choose "Enable SSH" and press Enter. On the left, "Enable SSH" changes to "Disable SSH"; on the right, "SSH is Disabled" changes to "SSH is Enabled".
4. Press Esc until you return to the main direct console screen.
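SSH can also be switched on non-interactively from the ESXi local shell, which may be convenient for scripted setups; a minimal sketch:

vim-cmd hostsvc/enable_ssh     # mark the SSH service as enabled
vim-cmd hostsvc/start_ssh      # start the SSH service immediately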
11.1.6 StorCLI on VMware ESXi 5.5

To be able to use the storage on an X6 machine you have to configure the RAID adapters. You can install the StorCLI tool directly under VMware ESXi 5.5. As a prerequisite, SSH has to be enabled on the VMware ESXi 5.5 hypervisor.

11.1.6.1 Installation

Follow these steps to install the StorCLI utility:

1. Download the latest StorCLI version from http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5092951.
2. Unzip the file and change into the support directory.
3. Copy the VIB to the ESXi server via SCP. The file can be placed anywhere where it is accessible to the ESXi console shell; in the following examples the file is located in /tmp.
4. Issue the following command:

   esxcli software vib install -v=/tmp/<VIBFILE> --no-sig-check

5. If you do not see any adapter although there is at least one installed, you must change the SCSI driver in VMware ESXi, i.e. disable the native driver for megaraid_sas:

   esxcfg-module -d <mod-name>

   A reboot is required to apply the configuration changes.

To see if StorCLI works correctly apply the following command:

storcli show all

You should see a list of the installed RAID adapters and an overview. To see the setup of the first adapter use

storcli /c0 show

Counting of the adapters starts with 0.

11.1.6.2 Configure RAID and CacheCade with StorCLI

You must configure the RAID setup and the integration of the CacheCade VDs before you format the disks. Decide with the list below, for every controller in the machine, which RAID levels and which number of RAID VDs you have to configure:

• 6 HDDs: 1 RAID5
• 9 HDDs: 1 RAID5
• 10 HDDs: 1 RAID6
• 18 HDDs: 2 RAID5
• 20 HDDs: 2 RAID6

Create a RAID5 array, where 252:0-7 is an example list of drives used, /cX is the controller, and rX is the RAID level:

storcli /c0 add vd type=r5 drives=252:0-7 wb ra cached strip=64 cachevd

All SSDs on a controller are used to create the CacheCade VD; there can only be one CacheCade VD per controller. Create the CacheCade VD, where 252:8-9 is an example list of the SSDs used and /cX is the controller:

storcli /c0 add vd cc type=r1 drives=252:8-9 wb assignvds=1

The parameter assignvds=X needs the VD ID of the RAID array created before. If you created two RAIDs on the controller, you can specify assignvds=X,Y.

Adjust the settings of the CacheCade VD, where /c0 is the RAID controller and /vX is the ID of the newly created CacheCade VD:

storcli /c0/v1 set rdcache=ra iopolicy=cached wrcache=wb

11.1.7 Setting up ESXi Storage in CLI

Since the ESXi hypervisor runs on standard System x HANA hardware, there is no external storage attached; the file systems must therefore be created first. You have to open an SSH session on the ESXi hypervisor. To list the installed storage devices execute:

esxcli storage core device list

To list all file systems known to the ESXi hypervisor call:

esxcli storage filesystem list

Figure 59: ESXi 5.x filesystems on a System x3850 X6

The VFAT file systems belong to the USB device. Create a VMFS5 filesystem on a partition.

Attention
These steps delete all data on the disks. Create a backup if necessary.

Example VMFS5 creation on a System x3850 X6:

vmkfstools --createfs vmfs5 --setfsname "hana 38 - HDD" /vmfs/devices/disks/naa.600605b0038acb6018f17abe32a77168

This creates a VMFS5 filesystem on the CacheCade accelerated RAID5. The device names will vary on your setup. Repeat the steps with every disk in the system you want to use for VMware.
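To confirm that a newly created VMFS5 datastore is usable, it can be queried with vmkfstools; a minimal sketch, assuming the volume label "hana 38 - HDD" used in the example above:

vmkfstools -Ph "/vmfs/volumes/hana 38 - HDD"    # prints file system type, capacity and free space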
11.1.8 Setting up vSwitches

The core of VMware vSphere networking are virtual switches. Virtual switches are necessary for the virtual machines to connect to each other or to the outside world; the VMs contact the physical adapters through vswitches. vSphere supports two types of switches: the standard switch (VSS) and the distributed switch (VDS). The latter is needed for vMotion; however, since vMotion is not supported in this solution, we will only describe standard switches. The communication uplink, that is the network interface you applied the IP address of the management network to, is always on vSwitch0. If you create a standard vswitch it does not have a connection to a physical interface per default. This can be useful if you want to have an isolated VM.

## adding switches
esxcli network vswitch standard add --vswitch-name=vSwitchGPFS --ports=24
esxcli network vswitch standard add --vswitch-name=vSwitchHANA --ports=24
esxcli network vswitch standard add --vswitch-name=vSwitchKOM --ports=24

# changing MTU
esxcli network vswitch standard set --mtu=9000 --vswitch-name=vSwitchGPFS
esxcli network vswitch standard set --mtu=9000 --vswitch-name=vSwitchHANA

# adding portgroups
esxcli network vswitch standard portgroup add --portgroup-name=GPFS_Network --vswitch-name=vSwitchGPFS
esxcli network vswitch standard portgroup add --portgroup-name=HANA_Network --vswitch-name=vSwitchHANA
esxcli network vswitch standard portgroup add --portgroup-name=KOM_Network --vswitch-name=vSwitchKOM

11.1.9 Setting up NIC bonding (teaming)

Teaming must be set up on the ESXi hypervisor; setting up teaming inside the VMs is useless. Teaming is always a HA teaming. To set up teaming you add a NIC (uplink) to a vswitch:

esxcli network vswitch standard uplink add -u <vmnic> -v <vswitch>

To see and set the failover policy of a vswitch you need the following commands:

esxcli network vswitch standard policy failover get -v vswitchX
esxcli network vswitch standard policy failover set -l <policy> -v vswitchX

11.1.10 Setting Storage for SLES and HANA ISO

There are two ways to provide the needed ISOs for the virtual machines: an NFS connected storage from an external source, or a datastore on the server.

11.1.10.1 Setting up an NFS datastore

It is easier to store the SLES for SAP 11 and the non-OS components ISOs on a separate filesystem and mount it via NFS on the ESXi hypervisor. To create an NFS mount, login to the hypervisor via SSH and execute:

esxcli storage nfs add --host=<hostname> --share=/<mount_dir> --volume-name=<create_volume_name>

To see the mounted NFS volumes execute:

esxcli storage nfs list

All mounted volumes are available at /vmfs/volumes.

11.1.10.2 Setting up a local datastore

A datastore is a directory on the ESXi hypervisor into which you copy the SLES and non-OS component ISOs. Create a datastore named ISO on a VMFS5 volume (summarized in the sketch below):

• Connect via SSH to the ESXi hypervisor.
• Change to a VMFS5 volume (cd).
• Create a datastore directory (mkdir ISO).
• Copy the SLES and non-OS components ISOs to the datastore via SCP.
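The local datastore steps above can be summarized as a short shell session; a minimal sketch in which the VMFS5 volume name, the ESXi host name and the ISO file names are examples:

cd /vmfs/volumes/datastore1            # change to a VMFS5 volume
mkdir ISO                              # create the datastore directory
# then copy the ISOs from the administration workstation:
scp SLES-for-SAP-11-SP3.iso root@esxi01:/vmfs/volumes/datastore1/ISO/
scp Lenovo-nonOS-components.iso root@esxi01:/vmfs/volumes/datastore1/ISO/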
11.1.11 Restart VMware ESXi Hypervisor

To restart the ESXi 5.5 hypervisor press F12 at the ESXi prompt. You have to authenticate before you can actually restart the hypervisor (figure 60).

11.1.12 Installing VMware vSphere Client

VMware vSphere Client is required to perform many of the tasks described in this document.

Note
To avoid any unexpected behavior, it is strongly recommended that you use the VMware vSphere Client that matches the version of the SAP HANA system hardware's VMware ESXi 5 hypervisor. If you already have an appropriate version of the VMware vSphere Client installed, skip to the next section 11.2: Configuring and Starting VMs with vSphere Client on page 138.

Complete the following steps to install VMware vSphere Client on a suitable system in your network:

1. Boot the system hardware to the VMware ESXi 5 hypervisor. The IP address of the VMware ESXi 5 hypervisor is displayed on the console, and the VMware ESXi 5 welcome screen is displayed (figure 61).
2. On the Microsoft Windows system where VMware vSphere Client will be installed, open a secure web connection (HTTPS) and enter the IP address of the VMware ESXi 5 hypervisor in the browser address bar.
   Note: If you have already added a host name to your DNS, you can use the host name instead of the IP address.
   Note: If a security warning window opens, ignore the warning and install the certificate.
3. Download the vSphere client and follow the on-screen instructions to install the client.

11.2 Configuring and Starting VMs with vSphere Client

The virtual machine is created with the aid of the vCenter GUI.

11.2.1 Web Client

Note
VMware vCenter server also provides a web based vSphere Client that can be used as well, if you prefer it. Open a secure web connection (HTTPS) to the vCenter server at https://<address to vCenter server>/vsphere-client/.

To configure and start the virtual machines, complete the following steps.

Note
The illustrations in this document might differ slightly from what you see on your screen.

1. Log in to the VMware vSphere Client. Type the IP address or host name of the host system, and your user name and password, and click the Login button.
   (a) If a security warning window opens, ignore this warning.
   (b) On a new server, you might also see a warning that there is no datastore; ignore this warning, too.
2. Create a new virtual machine (figure 62: Create new virtual machine).
3. Choose Custom (figure 63: Choose custom configuration).
4. Choose a name (figure 64: Choose a name).
5. Choose a datastore for the VM files (figure 65: Choose disk storage for VM files).
6. Choose the newest virtual machine hardware version (figure 66: Newest virtual machine hardware version). In order to run a virtual machine with more than 32 vCPUs, you must upgrade the VM hardware at the end or use the vCenter vSphere web client.
   (a) Windows based client (version 8): If you use the VMware vSphere Microsoft Windows client, you will only be presented with the possibility to choose a VM hardware version of 6, 7 or 8. See step 19 for details on upgrading the version using the Windows client.
   (b) Web based client (version 9) (figure 67: Configure the use of more than 32 CPUs).
7. Choose SUSE Linux Enterprise 11 (64-bit) (figure 68: Choose Operating System).
8. Choose the number of virtual CPUs according to table 48: SAP HANA Virtual Machine Sizes by Lenovo on page 128 (figure 69: Choose number of CPUs). It is important to note that if you are using the vSphere Microsoft Windows client, you will not be able to configure a virtual machine with more than 32 vCPUs until you upgrade the VM hardware. If you wish to create a virtual machine using more than 32 vCPUs, first select the maximum of 32 now and change it following the directions in step 19.
9. Choose memory according to table 48 (figure 70: Choose Memory).
10. Select the network cards (figure 71: Choose Network Cards).
11. Choose the SCSI controller (figure 72: Choose SCSI controller).
12. Create a new virtual disk. Two disks are needed for a VM: one for the OS and one for GPFS. Please see table 48 for the required disk sizes.
13. Disk layout for virtual machines (figure 73: Create new HANA datastore):
    (a) Choose the OS size according to table 48 (figure 74: Choose datastore size).
    (b) Choose a datastore for the OS (figure 75: Choose datastore).
    (c) Choose the correct SCSI node. The first virtual disk you create is assigned to "SCSI (0:0)", the second to "SCSI (0:1)", and so on (figure 76: Choose SCSI Node).
    (d) Finish the virtual drive creation.
14. Repeat steps 13 to 13d for the virtual disks for GPFS. In the case that your virtual machine requires a drive size that is larger than the capacity of a single available device, you must repeat steps 13 through 13d to include the total amount of storage across multiple devices. Select Edit the virtual machine settings before completion to do this.
15. Add a new CD/DVD device (figure 77: Add a new CD/DVD device). You need two CD/DVD drives for the installation: one for the SLES DVD ISO and one for the non-OS components ISO.
    (a) Select Datastore ISO File, and look for the "SLES for SAP ISO (NFS Mounted Datastore)" (figure 78: Select ISO image).
    (b) Select Connect at power on.
    (c) Select Browse...
    (d) Select IDE (0:0) (figure 79: Select IDE device 0:0).
    (e) Finish the creation of the SLES for SAP DVD (figure 80: Finish creation of SLES ISO mount).
16. Create the non-OS components DVD. Repeat step 15 for a second CD/DVD and include the Lenovo HANA ISO. Both ISOs are best put into an NFS datastore that has been attached previously in the server settings of the VMware ESX server.
17. Change the boot options to Boot to BIOS at the next reboot.
18. Press OK to create the virtual machine.
19. Upgrading the virtual machine to VM version 9 using the Windows client. If it is required to use more than 32 vCPUs in your SAP HANA virtual machine (sizes larger than 3 slots), you must use version 9 of the VMware virtual hardware. This is not possible during virtual machine creation using the Microsoft Windows client. After creating a virtual machine, right mouse click on the newly created virtual machine in the vSphere client and select Upgrade Virtual Hardware (figure 81: Upgrade virtual hardware). A pop-up will show asking you to confirm the upgrade (figure 82: Confirm upgrade). Press "Yes" and continue.
    (a) Increasing the number of virtual CPUs for larger VMs: If you are installing a virtual machine larger than 3 slots, you will need to update the number of vCPUs required for this system. Right mouse click on the newly created virtual machine in the left-hand side of the vSphere client and select Edit Settings (figure 83: Upgrade virtual hardware).
    (b) Select the CPUs and increase the number of virtual sockets and CPUs as required. We recommend to use 10 CPUs per socket for the SAP HANA virtual machine.
20. Upgrading the VM to VM version 10 using the command line. This describes the upgrade of the virtual hardware, e.g. CPU and RAM, if a vCenter is not available. You may need to do this if you want to run the VM with large RAM, e.g. more than 256 GB. To accomplish this, it is mandatory to have SSH access to the ESXi hypervisor enabled, and the VM has to be shut down. Every virtual machine has a VMX file, which contains all configuration data for the machine; usually the format is <vmname>.vmx. You can find the VMX file for your VM with the command

~ # find . -name '*.vmx'

This will list all available VMs. Choose the one you need and change into its directory. Open the VMX file with an editor (e.g. vi) and change the following lines:

virtualHW.version = "10"
memsize = "<sizeoframyouneed>"
numvcpus = "<numofcpuyouneed>"

To take the changes into effect you must reload the VM:

~ # vim-cmd vmsvc/reload <vmid>
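The <vmid> required by vim-cmd can be looked up with the same tool; a minimal sketch:

~ # vim-cmd vmsvc/getallvms    # lists all registered VMs with their Vmid in the first column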
11.3 Operating System (SLES for SAP 11 SP3) Installation

After starting the virtual machine, the installation prompt appears. Use the arrow keys to select the line "SLES for SAP Applications - Installation with external profile". Move the cursor to the boot options and change the autoyast parameter to autoyast=device://sr1/ (see figure 84: Changing the autoyast parameter for installation on page 150). After that press Enter. The installation will continue automatically.

Note
Please continue with the installation instructions in section 6.3: Phase 2 – SLES for SAP on page 53.

11.4 Operating System (Red Hat Enterprise Server 6.6) Installation

After starting the virtual machine, the installation prompt appears. Add "ks=cdrom://ks.cfg" to the kernel boot options (see figure 85: Adding kickstart parameter for install on page 151). After that press Enter. The installation will continue automatically.

Note
Please continue with the installation instructions in section 6.4: Phase 2 – RHEL on page 58, but also execute the steps in the following section.

11.4.1 Changes after Red Hat Installation

After the installation you need to login as root and perform the following tasks:

• Remove the file /etc/modprobe.d/bonding.conf.
• Remove the files ifcfg-bond0, ifcfg-bond1, and ifcfg-eth3 in /etc/sysconfig/network-scripts.
• Edit ifcfg-eth0 and remove the lines MASTER=bond0 and SLAVE=yes.
• The file ifcfg-eth0 should look like this:

  DEVICE=eth0
  TYPE=Ethernet
  USERCTL=no
  ONBOOT=yes
  BOOTPROTO=none
  NM_CONTROLLED=no
  IPADDR=[IPADDR of Server]
  NETMASK=[netmask]
  IPV6INIT=no

• The configuration for eth1 and eth2 is similar; a sketch follows after this list. Please keep in mind that eth1 is the GPFS network interface (gpfsnode01) and eth2 is the HANA network interface (hananode01).
• Edit /etc/hosts and add the IP address and full name of your server, the IP and name of gpfsnode01, and the IP and name of hananode01.
• Reboot the VM.
• After the reboot continue with the installation as described in section 6.6: Phase 3 on page 62.
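For reference, a matching ifcfg-eth1 derived from the ifcfg-eth0 example above might look like the following sketch; the placeholders must be replaced with the values of your GPFS network (gpfsnode01):

DEVICE=eth1
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
IPADDR=[IPADDR of gpfsnode01]
NETMASK=[netmask]
IPV6INIT=no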
11.5 Tuning of Operating System and VM

11.5.1 Tuning of OS

After the installation, the I/O scheduler should be NOOP. To check the scheduler of the running system run this command (this checks drive sdb):

# cat /sys/block/sdb/queue/scheduler

If noop is not the scheduler, change the scheduler in the running system with this command:

# echo noop > /sys/block/sdb/queue/scheduler

Harddisk IO tuning
Read/write operations on the HDDs can be improved if you adjust the device level read ahead and increase the number of IO requests that get buffered:

echo 4096 > /sys/block/sdb/queue/read_ahead_kb
echo 4096 > /sys/block/sdb/queue/nr_requests

These values are not permanent and will be lost after a reboot. To make the changes permanent you have to add them to a boot script; a sketch follows at the end of this subsection.

Increase the percentage of memory that can be filled with dirty pages:

echo 5 > /proc/sys/vm/dirty_background_ratio
echo 10 > /proc/sys/vm/dirty_ratio

Be careful with that, because most of the memory is occupied by SAP HANA.

In order to optimize and increase the queue depth of the pvSCSI driver inside the Linux OS on which SAP HANA runs, add the Linux kernel boot options below to /boot/grub/menu.lst:

vmw_pvscsi.cmd_per_lun=1024 vmw_pvscsi.ring_pages=32

The complete kernel line in /boot/grub/menu.lst will look like this:

title Lenovo Systems Solution for SAP HANA
    root (hd0,1)
    kernel /boot/vmlinuz-3.0.76-0.11-default root=/dev/sda2 resume=/dev/sda3 splash=silent transparent_hugepage=never intel_idle.max_cstate=0 processor.max_cstate=0 instmode=cd showopts vga=0x314 vmw_pvscsi.cmd_per_lun=1024 vmw_pvscsi.ring_pages=32
    initrd /boot/initrd-3.0.76-0.11-default

For low latency networking it is recommended to use the vmxnet3 network adapter and driver. The vmxnet3 driver is available after the installation of the VMware tools.
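A sketch of such a boot script, re-applying the tunables from this subsection at every boot; on SLES for SAP the file /etc/init.d/boot.local can be used, on RHEL /etc/rc.d/rc.local (these file names are common distribution defaults, not prescribed by this guide):

echo noop > /sys/block/sdb/queue/scheduler
echo 4096 > /sys/block/sdb/queue/read_ahead_kb
echo 4096 > /sys/block/sdb/queue/nr_requests
echo 5 > /proc/sys/vm/dirty_background_ratio
echo 10 > /proc/sys/vm/dirty_ratio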
11.5.2 Tuning of ESXi and VM

Parameters in the *.vmx file

Memory preallocation
It is sensible to allocate all memory at boot time; this can cause the VM to take a little longer to start, which is not a bug. Otherwise latency issues can occur if the needed memory segments are not in the near storage area. Preallocation is enabled with the sched.mem.prealloc parameter set to TRUE. If the parameter sched.mem.prealloc is set, it is mandatory to set the sched.mem.min parameter as well; if you do not do it, the VM will fail to start. Usually the sched.mem.min parameter equals the amount of memory in MB set for the VM:

sched.mem.min = "xxx"
sched.mem.prealloc = "TRUE"
sched.swap.vmxSwapEnabled = "FALSE"

NUMA
All System x servers are multicore servers. A VM with more than 8 vCPUs is considered a wide virtual machine. This can cause latency issues if the needed memory segments are not in the near storage area. To resolve this NUMA (non uniform memory access) problem, VMware has developed sophisticated NUMA aware schedulers; in addition, to reduce latency it may be sensible to bind a virtual machine to a CPU. These are the numa.* parameters in the *.vmx file of the VM:

numa.autosize.cookie = "200001"
numa.autosize.vcpu.maxPerVirtualNode = "20"
numa.nodeAffinity = "0"
numa.vcpu.preferHT = "TRUE"
sched.cpu.latencySensitivity = "HIGH"

The numa.autosize.vcpu.maxPerVirtualNode parameter is set automatically if the number of vCPUs is more than 8.

NIC Optimization
For performance and latency-sensitive VMs it is recommended to use the vmxnet3 vNIC driver. On the Linux side you have to install the VMware tools, because these provide the vmxnet3 kernel driver. In the *.vmx configuration file of the VM you change the driver for the GPFS and HANA Ethernet cards to vmxnet3; leave the eth0 device at e1000. The VM must be down to be able to change the parameters.

ethernet1.present = "TRUE"
ethernet1.virtualDev = "vmxnet3"
ethernet1.networkName = "GPFS Network"

On the ESXi host side, the module parameters of the physical NIC driver can be listed and adjusted; for example, to disable interrupt throttling for the igb driver:

esxcli system module parameters set -m igb -p "InterruptThrottleRate=0"
esxcli network nic list
esxcli system module parameters list -m <driver>

12 Upgrading the Hardware Configuration

Note
Please note that this chapter may differ for special setups like DR. This chapter is about standard appliance configurations.

12.1 Power Policy Configuration

Unless specified to manufacturing, systems shipped from the factory have default settings that may not meet customer desired settings. It is strongly recommended that during pre-installation setup the power policy and power management selections are checked to ensure that:

• sufficient power is available for the configuration, and
• the desired power redundancy and throttling settings have been selected.

Note
Failure to properly set these values can prevent the system from booting or log error events. For more information on how to perform this task, refer to section 'Setting power supply power policy and system power configurations' of the System x3850 X6 and x3950 X6 Installation and Service Guide [19].

12.2 Reboot Behavior

When installing or performing upgrades, or after installing additional hardware options, the operator should be prepared to expect multiple reboots during the POST process as the system performs the required configuration and setting changes. The number of reboots will vary depending upon the type (HW vs FW) and number of changes. Firmware changes (primary bank, secondary bank, both, option) have the most effect and may cause as many as seven reboots. The number and size of installed memory DIMMs affects the time between reboots, not the number of reboots. A lack of understanding of this reboot behavior could cause the operator to suspect bad or misbehaving hardware or firmware and interrupt the required process. Interrupting the process will result in increased time to complete the installation and may require service, depending on what actions the operator has performed improperly. For more information on this topic and to see a reboot guideline chart, refer to RETAIN tip MIGR5096873 [20].

There are several possibilities to upgrade IBM appliances. You can either upgrade the RAM of your appliance (scale-up) or you can add servers to create or increase the size of a cluster (scale-out). Upgrades from 2 CPU sockets to 4, and from 4 to 8 sockets are possible. An upgrade from the 4U chassis (x3850 X6) to the 8U chassis (x3950 X6) is possible – with some extra effort. When scaling out a stand-alone installation (single server) to a cluster without changing the RAM, it might be necessary to add additional storage to the servers. Additional storage can mean either to add 9 HDDs to an existing storage expansion, to add a new storage expansion, or (only for the 8U chassis) to add a second internal M5210 RAID controller. If your upgrade path requires new RAID controllers, please follow the instructions in section 4.4: Card Placement on page 15; please note that the PCI-e slot assignments change. Table 49: RAID array and RAID controller overview on page 155 lists the defined models according to number of CPUs, memory, and number of RAID arrays. Please note the different lines for stand-alone and scale-out that might list different numbers of RAID arrays.

19 http://publib.boulder.ibm.com/infocenter/systemx/documentation/topic/com.ibm.sysx.3837.doc/nn1hu_install_and_service_guide.pdf
Table 49: RAID array and RAID controller overview (one line per memory configuration; the values are given per column)

Chassis / CPUs / Usage: x3850 X6 with 2 CPUs (Standalone, Scaleout) and 4 CPUs (Standalone, Scaleout); x3950 X6 with 4 CPUs (Standalone, Scaleout) and 8 CPUs (Standalone, Scaleout)
Memory: 128-512GB, 256GB, 512GB, 256-512GB, 768-1024GB, 1.5-2TB, 512-1024GB, 512GB, 1-2TB, 3-4TB, 6TB, 8TB, 12TB, 1TB, 2TB, 4TB, 6TB, 8TB, 12TB
IA*: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2
EA**: 0 0 1 0 1 1 2 3 1 2 2 3 4 5 0 0 0 0 0 0 1 2 2 5 0 1 3 5 6 10
M5120/M5225: 0 0 0 0 1 1 1 2 1 1 1 2 2 3 0 0 0 0 0 0 1 1 1 3 0 1 2 3 3 5
Note: [1] [1] [2] [4] [3] [4] [4] [3] [3] [3] [2] [2] [2] [3] [3] [4] [2] [3] [3]

* IA = Number of RAID arrays on internal M5210 RAID controllers (excluding the RAID array for the OS).
** EA = Number of RAID arrays on external M5120/M5225 RAID controllers.
[1] = up to 4 nodes only
[2] = For Suite on HANA only, not for Datamart and BW.
[3] = Not approved with SAP HANA
[4] = For non-productive use only under relaxed HW requirements.

12.3 Adding storage

Note
Before adding or removing any hardware, remove AC power and wait for the LCD display and all Light Emitting Diodes (LEDs) to turn off.

12.3.1 Adding storage via EXP2524

Depending on your upgrade path, you have the following options:

• Add 9 HDDs to an already attached EXP; i.e. you then need 1 HDD more per RAID array (10 respectively 20 HDDs per EXP). Please note: you can also configure RAID6 on the EXPs.
• Attach a new EXP to the server and insert 9 (for 1 RAID5) or 18 HDDs (for 2 RAID5s) and 2 SSDs.
• Attach a new EXP to the server and insert 9 (for 1 RAID5) or 18 HDDs (for 2 RAID5s) and 2 SSDs, or install 2 additional SSDs into the first EXP for CacheCade RAID1 [21].

Follow these steps:

1. Install the M5120/M5225 in the server. (Skip this step when just adding storage to an existing EXP.)
2. Install the HDDs and SSDs in the EXP. (When just adding storage, you will only add HDDs and no SSDs.)
3. Connect the EXP to power and via SAS cable to the RAID controller. (Skip this step when just adding storage to an existing EXP.)
4. Continue with 12.3.3: Configure RAID array(s) on page 157.
5. Continue with 12.3.8: Configuring GPFS on page 159.

Note
All steps – except the installation of a new RAID controller – can be executed without downtime.

12.3.2 Adding storage on second internal M5210 controller

The second M5210 will be connected to 6 HDDs for a RAID5 and 2 SSDs for CacheCade.

1. Install the M5210 in the server.
2. Install the HDDs and SSDs.
3. Continue with 12.3.3: Configure RAID array(s) on page 157.
4. Continue with 12.3.8: Configuring GPFS on page 159.

20 http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5096873
21 For details on hardware configuration and setup see the Operations Guide for X6 based models, section CacheCade RAID1 Configuration.
12.3.3 Configure RAID array(s)

Note
Appliance version 1.8.80-12 (and later) comes with the tool saphana-raid-config.py. Use the following three commands instead of the manual configuration described in the next chapters.
Execute this command to configure the unconfigured HDDs into RAID arrays:
# saphana-raid-config.py -c
Execute this command to adjust the CacheCade settings:
# saphana-raid-config.py -u
Execute this command to activate the CacheCade also on the newly created RAID arrays:
# saphana-raid-config.py -c
Now continue with 12.3.8: Configuring GPFS on page 159.

The command line tool storcli is installed on your appliance. It will be used to configure the RAIDs.

Note
All commands were tested with storcli version 1.07.8. Other versions' syntax may vary.

Look in the output of storcli64 /call show for the controller with the unconfigured drives (UGood). The actual enclosure IDs (EID), slot numbers (Slt), and the ID of the controller may vary in your setup:

Controller = 1
Status = Success
Description = None

Product Name = ServeRAID M5120
:
-------------------------------------------------------------------------
EID:Slt DID State DG       Size Intf Med SED PI SeSz Model          Sp
-------------------------------------------------------------------------
8:1     18  UGood  - 371.597 GB SAS  SSD N   Y  512B TXA2D20400GA6I U
8:2     19  UGood  - 371.597 GB SAS  SSD N   Y  512B TXA2D20400GA6I U
8:3      9  UGood  -   1.089 TB SAS  HDD N   Y  512B ST1200MM0007   U
8:4     10  UGood  -   1.089 TB SAS  HDD N   Y  512B HUC101212CSS60 U
8:5     11  UGood  -   1.089 TB SAS  HDD N   Y  512B HUC101212CSS60 U
8:6     12  UGood  -   1.089 TB SAS  HDD N   Y  512B ST1200MM0007   U
8:7     13  UGood  -   1.089 TB SAS  HDD N   Y  512B HUC101212CSS60 U
8:8     14  UGood  -   1.089 TB SAS  HDD N   Y  512B ST1200MM0007   U
8:9     15  UGood  -   1.089 TB SAS  HDD N   Y  512B HUC101212CSS60 U
8:10    16  UGood  -   1.089 TB SAS  HDD N   Y  512B ST1200MM0007   U
8:11    17  UGood  -   1.089 TB SAS  HDD N   Y  512B HUC101212CSS60 U
-------------------------------------------------------------------------

Create the RAID5, where /c1 stands for controller 1 and 8:3-11 is an example list of the HDDs used; the drives parameter follows the scheme <Enclosure Device ID>:<Slot Number range>:

storcli64 /c1 add vd type=raid5 drives=8:3-11 wb ra cached pdcache=off strip=64

If you have to configure a second RAID5 array, configure it accordingly.
12.3.4 Deciding for a CacheCade RAID Level

You can configure the CacheCade RAID arrays either with RAID1 or RAID0. Depending on the hardware setup, you have to decide which RAID level to configure:

• 1 M5210: only RAID0
• 1 M5210 + 1 M5120/M5225 (with 2 SSDs): only RAID0
• 1 M5210 + 1 M5120/M5225 (with 4 SSDs): RAID0 or RAID1
• 1 M5210 + 2 or more M5120/M5225: RAID0 or RAID1
• 2 M5210: RAID0 or RAID1

Please keep in mind that all CacheCade VDs must have the same RAID level. This means that you may have to recreate existing CacheCade arrays that have the wrong RAID level; see 12.3.7: Changing the CacheCade RAID Level.

12.3.5 Configuring RAID array when CacheCade is not yet configured

Create the CacheCade device, where /cX is the RAID controller and drives=8:1-2 is an example list of the SSD drives used. To decide for the RAID level (raidX) see the previous section. The parameter assignvds=X needs the VD ID of the RAID array created before; if you created two RAIDs on the controller, you can specify assignvds=X,Y to assign the CacheCade VD to both arrays. There can only be one CacheCade VD per controller.

storcli64 /c1 add vd cachecade type=raidX drives=8:1-2 wb assignvds=0

Adjust the settings of the CacheCade device, where /cX is the RAID controller and /vX is the ID of the newly created CacheCade VD:

storcli64 /c1/v1 set rdcache=ra iopolicy=cached wrcache=wb

12.3.6 Configuring RAID array with existing CacheCade

When you added storage to an existing EXP, the CacheCade VD is already configured. Assign the CacheCade VD to the newly created RAID5 array, where /cX is the RAID controller and /vX is the RAID5 array:

storcli64 /c1/v2 set ssdcaching=on

12.3.7 Changing the CacheCade RAID Level

To change the RAID level of an existing CacheCade VD you have to delete and recreate the CacheCade VD. At first, find the CacheCade VD ID and the slots of the SSDs:

storcli64 /c0 show

Now delete the CacheCade VD, where /cX is the RAID controller and /vX is the CacheCade VD:

storcli64 /c0/v1 delete cachecade

Create the deleted CacheCade again, where /cX is the RAID controller and drives=12:1-2 is an example list of the SSD drives used:

storcli64 /c2 add vd cachecade type=raid1 drives=12:1-2 wb

Adjust the settings of the CacheCade VD, where /cX is the RAID controller and /vX is the ID of the newly created CacheCade VD:

storcli64 /c2/v1 set rdcache=ra iopolicy=cached wrcache=wb

12.3.8 Configuring GPFS

Find the block device that belongs to the newly created RAID array; mmlsnsd -X, lsscsi, and lsblk may be helpful. Find the name for the new NSD(s). For example, if you are on gpfsnode01, execute mmlsnsd | grep gpfsnode01 to find out the names that are already in use for the existing NSDs.

Create a stanza file (/var/mmfs/config/disk.list.data.gpfsnodeZZ.new) containing the information about the new GPFS NSD(s). YY is the number of the new data disk and ZZ is the node number (e.g. 01 in gpfsnode01):

%nsd:
  device=/dev/sdX
  nsd=dataYYnodeZZ
  servers=gpfsnodeZZ
  usage=dataAndMetadata
  failureGroup=10ZZ
  pool=system

Execute:

# mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnodeZZ.new -v no
# mmadddisk sapmntdata -F /var/mmfs/config/disk.list.data.gpfsnodeZZ.new -v no

Repeat this block for all newly created RAID arrays accordingly.

Attention
The following command must only be executed on stand-alone configurations. Do not execute it in a cluster environment!

# mmrestripefs sapmntdata -b

This will balance the data equally between the used and unused disks.

Change the GPFS quotas to match the new requirements. Run the quota calculator and you will see a result like this:

# saphana-quota-calculator.sh
Please set the Shared quota to 8187 GB
Please set the Data quota to 3072 GB
Please set the Log quota to 1024 GB

Use the following command(s) to set the quota(s):

# mmsetquota sapmntdata:hanadata --block 3072G:3072G
# mmsetquota sapmntdata:hanalog --block 1024G:1024G
# mmsetquota sapmntdata:hanashared --block 8187G:8187G
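The effective quotas can be verified afterwards; a minimal sketch:

# mmrepquota -j sapmntdata    # reports the block limits of the hanadata, hanalog and hanashared filesets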
12.4 Adding memory

Note
The installation of additional memory requires a system downtime.

When the customer decides for a scale-up, i.e. adding RAM to the server(s), you have to follow the memory DIMM placement rules for IBM X6 servers to get the best performance. The number of memory DIMMs can be computed as "RAM size" / "DIMM size"; for example, 1 TB of RAM built from 32 GB DIMMs requires 32 DIMMs. Tables 50: x3850 X6 Memory DIMM Placement on page 160 and 51: x3950 X6 Memory DIMM Placement on page 160 show which slots must be populated for specific configurations (✓ = populated, ✗ = empty). The DIMMs must be placed equally over all CPU books – each CPU book must contain the same amount of DIMMs in the same slots.

DIMMs per server    |       2 Sockets       |       4 Sockets
                    |  8  16  24  32  48    | 16  32  48  64  96
DIMM Slots  9, 10   |  ✓   ✓   ✓   ✓   ✓    |  ✓   ✓   ✓   ✓   ✓
DIMM Slots 15, 16   |  ✓   ✓   ✓   ✓   ✓    |  ✓   ✓   ✓   ✓   ✓
DIMM Slots  8, 11   |  ✗   ✓   ✓   ✓   ✓    |  ✗   ✓   ✓   ✓   ✓
DIMM Slots 14, 17   |  ✗   ✓   ✓   ✓   ✓    |  ✗   ✓   ✓   ✓   ✓
DIMM Slots  7, 12   |  ✗   ✗   ✓   ✓   ✓    |  ✗   ✗   ✓   ✓   ✓
DIMM Slots 13, 18   |  ✗   ✗   ✓   ✓   ✓    |  ✗   ✗   ✓   ✓   ✓
DIMM Slots  1,  6   |  ✗   ✗   ✗   ✓   ✓    |  ✗   ✗   ✗   ✓   ✓
DIMM Slots  2,  5   |  ✗   ✗   ✗   ✓   ✓    |  ✗   ✗   ✗   ✓   ✓
DIMM Slots  3,  4   |  ✗   ✗   ✗   ✗   ✓    |  ✗   ✗   ✗   ✗   ✓
DIMM Slots 19, 24   |  ✗   ✗   ✗   ✗   ✓    |  ✗   ✗   ✗   ✗   ✓
DIMM Slots 20, 23   |  ✗   ✗   ✗   ✗   ✓    |  ✗   ✗   ✗   ✗   ✓
DIMM Slots 21, 22   |  ✗   ✗   ✗   ✗   ✓    |  ✗   ✗   ✗   ✗   ✓

Table 50: x3850 X6 Memory DIMM Placement

DIMMs per server    |       4 Sockets       |        8 Sockets
                    | 16  32  48  64  96    | 32  64  96  128  192
DIMM Slots  9, 10   |  ✓   ✓   ✓   ✓   ✓    |  ✓   ✓   ✓   ✓    ✓
DIMM Slots 15, 16   |  ✓   ✓   ✓   ✓   ✓    |  ✓   ✓   ✓   ✓    ✓
DIMM Slots  8, 11   |  ✗   ✓   ✓   ✓   ✓    |  ✗   ✓   ✓   ✓    ✓
DIMM Slots 14, 17   |  ✗   ✓   ✓   ✓   ✓    |  ✗   ✓   ✓   ✓    ✓
DIMM Slots  7, 12   |  ✗   ✗   ✓   ✓   ✓    |  ✗   ✗   ✓   ✓    ✓
DIMM Slots 13, 18   |  ✗   ✗   ✓   ✓   ✓    |  ✗   ✗   ✓   ✓    ✓
DIMM Slots  1,  6   |  ✗   ✗   ✗   ✓   ✓    |  ✗   ✗   ✗   ✓    ✓
DIMM Slots  2,  5   |  ✗   ✗   ✗   ✓   ✓    |  ✗   ✗   ✗   ✓    ✓
DIMM Slots  3,  4   |  ✗   ✗   ✗   ✗   ✓    |  ✗   ✗   ✗   ✗    ✓
DIMM Slots 19, 24   |  ✗   ✗   ✗   ✗   ✓    |  ✗   ✗   ✗   ✗    ✓
DIMM Slots 20, 23   |  ✗   ✗   ✗   ✗   ✓    |  ✗   ✗   ✗   ✗    ✓
DIMM Slots 21, 22   |  ✗   ✗   ✗   ✗   ✓    |  ✗   ✗   ✗   ✗    ✓

Table 51: x3950 X6 Memory DIMM Placement

After the installation of additional memory, SAP HANA's global allocation limit must be reconfigured.

12.5 Adding CPU Books

Note
The installation of additional CPU books requires a system downtime.

The following upgrade paths are possible:

• x3850 X6, 2 sockets → x3850 X6, 4 sockets
• x3850 X6, 4 sockets → x3950 X6, 4 sockets, including the exchange of the 4U chassis to an 8U chassis
• x3850 X6, 4 sockets → x3950 X6, 8 sockets, including the exchange of the 4U chassis to an 8U chassis
• x3950 X6, 4 sockets → x3950 X6, 8 sockets

Follow these steps to add additional CPU books to a server:

1. Disable the GPFS auto-mount for your GPFS filesystems. If you only have the standard GPFS filesystem, the following command is enough; if you have more GPFS filesystems, change the configuration for them accordingly:

   # mmchfs sapmntdata -A no

2. Power off the machine.
3. Place the new CPU books in the server. Please make sure that the memory DIMMs are placed correctly in the CPU books. (See 12.4: Adding memory on page 160.)
4. Adopt the PCI-e card placement according to the tables in section 4.4: Card Placement on page 15.
5. Power on the machine.
6. On SLES for SAP: Save the file /etc/udev/rules.d/71-ibm-saphana-persistent-net.rules to another location. On RHEL: Save the file /etc/udev/rules.d/99-ibm-saphana-persistent-net.rules to another location.
7. Execute

   # saphana-udev-config.sh -sw

8. Reboot the machine.
9. Review the network settings.
10. Mount the GPFS filesystem by hand:

    # mmmount sapmntdata

11. Enable the GPFS auto-mount option for your GPFS filesystems again:

    # mmchfs sapmntdata -A yes

12. At last start the HANA database.
13 Software Updates

Note
Starting with appliance version 1.9.96-13, the mount point for the GPFS file system sapmntdata is user configurable during installation. Lenovo currently recommends to use /sapmnt, while SAP promotes /hana. SAP HANA will also be installed into this path. The following commands and code snippets use /sapmnt; for any other path please replace /sapmnt with the chosen path.

13.1 Warning

Please be careful with updates of the software stack. Be defensive with updates, as updates may affect the proper operation of your SAP HANA appliance, and the System x SAP HANA development team does not test every released patch or update. Please update the software and driver components only with a good reason: either because you are affected by a bug or have a security concern, and only after Lenovo or SAP support advised you to upgrade or after requesting approval from support via the SAP OSS Ticket System on the queue BC-OP-LNX-LENOVO.

13.2 Update Variants

This subsection first gives an overview of the general procedure for applying updates. Then there are two ways presented how one could update in a cluster environment: either disruptive with a downtime, or rolling, where one node is updated at a time and then re-added to the cluster.

13.2.1 General per node update procedure

This is the generic version for any kind of update which requires a system restart.

1. (on the target node) Check GPFS cluster health
Before performing any updates on any node, verify that the cluster is in a sane state. Before performing a rolling update (a non-disruptive, one node at a time update) in a cluster environment, make sure that your cluster is in good health and all server nodes and storage devices are running. First check that all nodes are running with the command

# mmgetstate -a

and check that all nodes are active. Then verify that all disks are active:

# mmlsdisk sapmntdata -e

The disks on the node to be taken down do not need to be in the up state, but make sure that all other disks are up.

Warning
If disks of more than one server node are down, the file system will be shut down, causing all other SAP HANA nodes to fail.

2. (on the target node) Shutdown SAP HANA
Shutdown SAP HANA and the sapstartsrv daemon via

# service sapinit stop
Verify that SAP HANA and sapstartsrv are not running anymore:

# ps ax | grep sapstart
# ps ax | grep hdb

No processes should be found; if any processes are found, please retry stopping SAP HANA.

3. (on the target node) Unmount the GPFS file system
Unmount the shared file system locally:

# mmumount sapmntdata

and take care that no open process is preventing the file system from unmounting, e.g. running shells (root, <SID>adm, etc.). If that happens, use

# lsof /sapmnt

to find processes still accessing the file system. Other nodes within the cluster can still mount the shared file system.

4. Shutdown GPFS

# mmshutdown

5. Perform upgrades
Do now the necessary updates.

6. Restart the system
Restart the server if necessary. GPFS and SAP HANA should start automatically during reboot; in that case skip step 7.

7. Restart GPFS
If you did not restart the whole server in step 6, start GPFS:

# mmstartup

8. Mount the file system if not already mounted. You may mount the file system after starting GPFS:

# mmmount sapmntdata

9. Start SAP HANA

# service sapinit start

10. (on any node) Verify GPFS disks
Verify all GPFS disks are active again:

# mmlsdisk sapmntdata -e

If any disks are down, restart them with the command

# mmchdisk sapmntdata start -a

If disks are suspended, you can resume them all with the command

# mmchdisk sapmntdata resume -a

Afterwards check the disk status again.

11. (on any node) GPFS Restripe
Start a restripe so that all data is replicated properly again:

# mmrestripefs sapmntdata -r

Warning
Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

12. Restore accurate usage count
If a file system was ill-replicated, the used block counts may not be accurate. Therefore it is recommended that you run mmcheckquota to restore the accurate usage count after the file system is no longer ill-replicated:

# mmcheckquota -a

13. Continue with the next node

13.2.2 Disruptive Cluster Update

In the disruptive cluster update scenario, one would shutdown the whole cluster and apply all updates. This will cause a downtime. For updating the SAP HANA software in a SAP HANA cluster, please refer to the SAP HANA Technical Operations Manual.

13.2.3 Full Cluster Rolling Update

This update procedure applies when you are performing updates which either need a server restart, like a Linux kernel update, or need a restart of specific server software (e.g. GPFS) on the affected nodes. The idea of a rolling update is to update only one server at a time and, after the server is back online in the cluster, to proceed with the next node in the same way. By doing so, you can avoid downtimes.

13.3 RHEL versionlock

RHEL has a mechanism to lock the versions of specified packages. SAP HANA is only released for dedicated RHEL versions; without this mechanism you would update from RHEL 6.5 to RHEL 6.6 by doing a 'yum update' without further notice. Therefore it is advisable to restrict updates of the kernel version. This can be done independently of other updates. If it is not already done, this mechanism can be activated by installing two packages and creating a file /etc/yum/pluginconf.d/versionlock.list in the following way:

yum -y install yum-versionlock yum-security

For RHEL 6.5 the file /etc/yum/pluginconf.d/versionlock.list should look like this:
To allow later updates (like kernel updates), you have to delete all lines containing restrictions for that update case in the file versionlock.list. After the update it is necessary to create similar restrictions for the updated packages, using the new package versions. Please refer also to the following SAP Notes: 2013638 – SAP HANA DB: Recommended OS settings for RHEL 6.5 and 2136965 – SAP HANA DB: Recommended OS settings for RHEL 6.6.

13.4 Linux Kernel Update

At the time this document was created, kernel version 3.0.101-0.47.52.1 is mandatory for SLES for SAP 11 SP3. Please consult SAP if a higher version is recommended by now.

Warning
If the Linux kernel is updated, it is mandatory to recompile the GPFS portability layer kernel module. Otherwise the system will not work anymore!

13.4.1 SLES Kernel Update Methods

There are multiple methods to update a SLES for SAP installation. Possible methods include command line based tools like rpm -Uvh or CLI/X11 based GUI tools like SUSE's YaST2. Possible update sources include kernel RPMs copied onto the target server, a corporate-internal SLES update server/repository, or Novell's update server via the Internet (requires registration of the installation). Please refer to Novell's official SLES documentation. A good starting point is the chapter "Installing or Removing Software" in the SLES 11 Deployment Guide, obtainable from https://www.suse.com/documentation/sles11/. Updating using YaST is recommended over updating from files.

If you decide to update from RPM files, you need to update at least the following files (a sketch follows the list below):

• kernel-default-<kernelversion>.x86_64.rpm
• kernel-default-base-<kernelversion>.x86_64.rpm
• kernel-default-devel-<kernelversion>.x86_64.rpm
• kernel-source-<kernelversion>.x86_64.rpm
• kernel-syms-<kernelversion>.x86_64.rpm
• kernel-trace-devel-<kernelversion>.x86_64.rpm
• kernel-xen-devel-<kernelversion>.x86_64.rpm
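If you do update from files, running all packages in a single rpm transaction keeps them consistent. A hedged sketch using the mandatory SLES 11 SP3 version named above; the file names are placeholders for the RPMs you actually downloaded:

# Hedged example of a file-based kernel update on SLES:
rpm -Uvh kernel-default-3.0.101-0.47.52.1.x86_64.rpm \
         kernel-default-base-3.0.101-0.47.52.1.x86_64.rpm \
         kernel-default-devel-3.0.101-0.47.52.1.x86_64.rpm \
         kernel-source-3.0.101-0.47.52.1.x86_64.rpm \
         kernel-syms-3.0.101-0.47.52.1.x86_64.rpm
# Afterwards the GPFS portability layer must be rebuilt, see the
# warning above and section 13.5.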
13.4.2 RHEL Kernel Update Methods

There are multiple methods to update a RHEL installation. Possible update sources include kernel RPMs copied onto the target server, a corporate-internal RHEL update server/repository, or Red Hat's update server via the Internet (requires registration of the installation). Please refer to Red Hat's official RHEL documentation. A good starting point is the Red Hat Deployment Guide (chapter 27 "Manually Upgrading The Kernel"), see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/index.html. Updating using repositories is recommended over updating from files.

There are two sources for kernel upgrades on Red Hat Linux: http://www.redhat.com/security/updates/ and http://www.redhat.com/docs/manuals/RHNetwork/. Download the kernel RPMs necessary for your system.

If you decide to update from RPM files, you need to update at least the following files:

• kernel-<kernelversion>.el6.x86_64.rpm
• kernel-devel-<kernelversion>.el6.x86_64.rpm
• kernel-firmware-<kernelversion>.el6.noarch.rpm
• kernel-headers-<kernelversion>.el6.x86_64.rpm

Red Hat recommends to keep the old kernel packages as a fallback in case there are problems with the new kernel. Please refer to chapter 13.3: RHEL versionlock on page 164 for how to check whether a versionlock mechanism is implemented and how to allow kernel updates.

13.4.3 Kernel Update Procedure

Step  Title
1     Stop SAP HANA
2     Unmount GPFS file systems, stop GPFS
3     Update Kernel Packages
4     Build new GPFS portability layer
5     Restart GPFS & check GPFS status
6     Start SAP HANA

Table 52: Upgrade GPFS Portability Layer Checklist

1. Stop SAP HANA and all other SAP software running in the whole cluster or on the single node cleanly. Stopping SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help Portal (https://help.sap.com/hana) or SAP Service Marketplace (https://service.sap.com/hana). Log in as root on each node and execute

# service sapinit stop

Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP software manually. Make sure no process has files open on /sapmnt; you can test that with the command:

# lsof /sapmnt

2. Unmount GPFS file systems, stop GPFS

# mmumount all
# mmshutdown

3. Update Kernel Packages
Please update the kernel now using your preferred method.

4. Build new GPFS portability layer

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

5. Restart GPFS & check GPFS status

# mmstartup
# mmmount all
# mmgetstate
# mmlsmount all

6. Start SAP HANA using

# service sapinit start

Older versions of the appliance may not have this script, so please start SAP HANA and other SAP software manually as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.
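After the restart it is worth verifying that the node really runs the new kernel and that the rebuilt GPFS kernel modules are loaded; module names can differ slightly between GPFS releases:

# Show the running kernel version:
uname -r
# Typical GPFS module names contain "mmfs" or "tracedev":
lsmod | grep -E 'mmfs|tracedev'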
13.5 Updating GPFS

Note
Upgrading GPFS requires a rebuild of the portability layer. The same applies if the Linux kernel was upgraded.

13.5.1 Disruptive GPFS Cluster Update

Step  Title
1     Stop SAP HANA
2     Unmount GPFS file systems, stop GPFS
3     Upgrade to new GPFS Version
4     Build new GPFS portability layer
5     Update cluster and file system information
6     Restart GPFS, mount GPFS file systems
7     Check Status of GPFS
8     Start SAP HANA

Table 53: Upgrade GPFS Portability Layer Checklist

1. Stop SAP HANA and all other SAP software running in the whole cluster or on the single node cleanly. Stopping SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help Portal (https://help.sap.com/hana) or SAP Service Marketplace (https://service.sap.com/hana). Log in as root on each node and execute

# service sapinit stop

Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP software manually. Make sure no process has files open on /sapmnt; you can test that with the command:

# lsof /sapmnt

2. Unmount GPFS file systems, stop GPFS

# mmumount all -a
# mmshutdown -a

3. Upgrade to new GPFS version
This step may be skipped if only the portability layer needs to be re-compiled due to a Linux kernel update. (Replace <newgpfsversion> with the GPFS version number of the update.)

# rpm -Uvh gpfs.base-<newgpfsversion>.x86_64.update.rpm
# rpm -Uvh gpfs.docs-<newgpfsversion>.noarch.rpm
# rpm -Uvh gpfs.gpl-<newgpfsversion>.noarch.rpm
# rpm -Uvh gpfs.msg.en_US-<newgpfsversion>.noarch.rpm

4. Build new portability layer

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

5. Update cluster and file system information to the current GPFS version

# mmchconfig release=LATEST
# mmstartup -a
# mmchfs sapmntdata -V full
# mmmount all -a

6. Check status of GPFS

# mmgetstate -a
# mmlsmount all -L
# mmlsconfig | grep minReleaseLevel

7. Start SAP HANA using

# service sapinit start

Older versions of the appliance may not have this script, so please start SAP HANA and other SAP software manually as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.
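To confirm that the portability layer build actually installed fresh kernel modules for the running kernel, one can look into the modules directory; the exact path and file names are an assumption and may vary between GPFS releases:

# "make InstallImages" typically places the GPFS modules under the
# running kernel's extra directory:
ls -l /lib/modules/$(uname -r)/extra/ | grep -i -E 'mmfs|tracedev'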
13.5.2 Rolling GPFS Upgrade per Node Procedure

To minimize downtimes, please distribute the GPFS update package (GPFS-3.X.0-xx-x86_64-Linux.tar.gz) on all nodes and extract the tar-ball before starting.

1. Check GPFS cluster health
Before performing any updates on any node, verify that the cluster is in a sane state. First check that all nodes are running with the command

# mmgetstate -a

and check that all nodes are active. Then verify that all disks are active:

# mmlsdisk sapmntdata -e

The disks on the node to be taken down do not need to be in the "up" state, but make sure that all other disks are up.

Warning
If disks of more than one server node are down, the file system will be shut down, causing all other SAP HANA nodes to fail.

2. (on the target node) Shutdown SAP HANA
Shut down SAP HANA and the sapstartsrv daemon via

# service sapinit stop

Verify that SAP HANA, sapstartsrv and any other process accessing /sapmnt are not running anymore:

# lsof /sapmnt

No processes should be found. If any processes are found, please retry stopping SAP HANA and any other process accessing /sapmnt.

3. (on the target node) Unmount the GPFS file system
Unmount the shared file system locally:

# mmumount sapmntdata

and take care that no open process is preventing the file system from unmounting. Other nodes within the cluster can still mount the shared file system. If the unmount fails, use

# lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <SID>adm, etc.), close them and retry.

4. Shutdown GPFS

# mmshutdown

GPFS should unload its kernel modules during its shutdown, so check the output of this command.

5. Update GPFS Software
Change to the directory where you extracted the GPFS update package GPFS-3.X.0-xx-x86_64-Linux.tar.gz, where X and xx denote the desired target GPFS version. Execute the following commands:

# rpm -Uvh gpfs.base-3.X.0-xx.x86_64.update.rpm
# rpm -Uvh gpfs.docs-3.X.0-xx.noarch.rpm
# rpm -Uvh gpfs.gpl-3.X.0-xx.noarch.rpm
# rpm -Uvh gpfs.msg.en_US-3.X.0-xx.noarch.rpm

Afterwards the GPFS Linux kernel module must be recompiled:

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

6. Restart GPFS

# mmstartup

Verify that the node started up correctly:

# mmgetstate

During the startup phase the node is shown in the state "arbitrating"; this changes to "active" when GPFS has completed startup.

7. Mount file systems
Mount the file system after starting GPFS:

# mmmount sapmntdata

8. (on the target node) Start SAP HANA

# service sapinit start

9. (on any node) Verify GPFS disks
Verify that all GPFS disks are active again:

# mmlsdisk sapmntdata -e

If any disks are shown as down, restart them with the command

# mmchdisk sapmntdata start -a

If disks are suspended you can resume them all with the command

# mmchdisk sapmntdata resume -a

Afterwards check the disk status again.

10. (on any node) GPFS Restripe
Start a restripe so that all data is replicated properly again:

# mmrestripefs sapmntdata -r

Warning
Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

11. Restore accurate usage count
If a file system was ill-replicated, the used block count results from mmcheckquota may not be accurate. Therefore it is recommended that you run mmcheckquota to restore the accurate usage count after the file system is no longer ill-replicated:

# mmcheckquota -a

12. Continue with the next node

After all nodes are updated you can update the GPFS cluster configuration and the GPFS "on disk format" (the data structures written to disk) to the newer version. This update is non-disruptive and can be performed while the cluster is active. Not all updates require these steps, but it is safe to do them in any case.

1. Update the cluster configuration with the newest settings:

# mmchconfig release=LATEST

2. Update the file system's on disk format to activate new functionality:

# mmchfs sapmntdata -V full

Notice that a successful upgrade of the GPFS on disk format to a newer version will make a downgrade to previous GPFS versions impossible. You can verify the minimum needed GPFS version with the command:

# mmlsfs sapmntdata -V
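The distribution of the update package required at the start of this procedure can be scripted. A hedged sketch: the node names follow the example installation configuration used elsewhere in this guide, and the staging directory /var/tmp/gpfs-update is an assumption:

# Distribute and extract the update package on all nodes:
PKG=GPFS-3.X.0-xx-x86_64-Linux.tar.gz    # placeholder file name
for node in gpfsnode01 gpfsnode02; do
    ssh "$node" "mkdir -p /var/tmp/gpfs-update"
    scp "$PKG" "$node:/var/tmp/gpfs-update/"
    ssh "$node" "cd /var/tmp/gpfs-update && tar xzf $PKG"
done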
13.6 Upgrading from GPFS 3.5 to 4.1

This section applies to single node and cluster installations. Cluster installations can be upgraded either all at once (disruptive) or node-by-node (rolling). For single node installations only a disruptive upgrade can be done. DR installations can also be upgraded either all at once (disruptive) or node-by-node (rolling). Additionally, it is possible to upgrade the DR site first and the primary site at a later point. If the DR site hosts a non-productive SAP HANA instance, this approach can be used to verify the new code level in pre-production.

Note
GPFS 4.1 is only supported with PTF 8 or higher (that is 4.1.0-8). GPFS 4.1 Standard Edition is required (Express is not sufficient). With version 4.1, GPFS has introduced three editions with different content. Existing GPFS 3.5 clients are entitled to GPFS 4.1 Standard Edition. If you have a gpfs.ext RPM file then you have Standard Edition. Make sure you have the required GPFS packages before continuing. For further information, including how to migrate licenses, see the GPFS FAQ (http://www-01.ibm.com/support/knowledgecenter/api/content/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html#migto41).

13.6.1 Disruptive Upgrade from GPFS 3.5 to 4.1

Step  Title
1     Stop SAP HANA
2     Unmount GPFS file systems, stop GPFS
3     Remove GPFS 3.5 packages, install GPFS 4.1 packages
4     Build new GPFS portability layer
5     Update cluster and file system information
6     Restart GPFS, mount GPFS file systems
7     Check Status of GPFS
8     Start SAP HANA

Table 54: GPFS Upgrade Checklist

1. Stop SAP HANA and all other SAP software running in the whole cluster or on the single node cleanly. Stopping SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help Portal (https://help.sap.com/hana) or SAP Service Marketplace (https://service.sap.com/hana). Log in as root on each node and execute

# service sapinit stop

Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP software manually. Make sure no process has files open on /sapmnt; you can test that with the command:

# lsof /sapmnt

2. Unmount GPFS file systems, stop GPFS processes

# mmumount all -a
# mmshutdown -a

3. Remove GPFS 3.5 packages, install GPFS 4.1 packages
Get a list of all installed GPFS 3.5 packages:

# rpm -qa | grep gpfs

Remove all GPFS 3.5 packages returned from the above command:

# rpm -e gpfs.base gpfs.docs gpfs.gpl gpfs.msg.en_US

Optionally, also remove a gpfs.gplbin package if you have that installed. Then install the GPFS 4.1 packages:

# rpm -ivh gpfs.base-4.1.0-0.x86_64.rpm
# rpm -ivh gpfs.docs-4.1.0-0.noarch.rpm
# rpm -ivh gpfs.ext-4.1.0-0.x86_64.rpm
# rpm -ivh gpfs.gpl-4.1.0-0.noarch.rpm
# rpm -ivh gpfs.gskit-8.0.50-16.x86_64.rpm
# rpm -ivh gpfs.msg.en_US-4.1.0-0.noarch.rpm

Update to GPFS 4.1 PTF 8. This is just an example; please update to the PTF recommended at this point in time:

# rpm -Uvh gpfs.base-4.1.0-8.x86_64.update.rpm
# rpm -Uvh gpfs.docs-4.1.0-8.noarch.rpm
# rpm -Uvh gpfs.ext-4.1.0-8.x86_64.update.rpm
# rpm -Uvh gpfs.gpl-4.1.0-8.noarch.rpm
# rpm -Uvh gpfs.msg.en_US-4.1.0-8.noarch.rpm

4. Build new portability layer

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages
# make rpm        (optional)

5. Update cluster and file system information to the current GPFS version and activate the new cluster configuration repository (CCR) feature:

# mmstartup -a
# mmchconfig release=LATEST
# mmchcluster --ccr-enable
# mmchfs sapmntdata -V full
# mmmount all -a

6. Check status of GPFS

# mmgetstate -a
# mmlsmount all -L
# mmlsconfig | grep minReleaseLevel

7. Start SAP HANA using

# service sapinit start

Older versions of the appliance may not have this script, so please start SAP HANA and other SAP software manually as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.
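To confirm the daemon level a node actually runs after the upgrade, the GPFS diagnostic command can be queried; mmdiag is assumed to be available at the 3.5/4.1 levels used here:

# Show the GPFS daemon build/version on the local node:
/usr/lpp/mmfs/bin/mmdiag --version
# Cross-check the cluster-wide minimum release level:
/usr/lpp/mmfs/bin/mmlsconfig minReleaseLevel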
13.6.2 Rolling upgrade per node from GPFS 3.5 to 4.1

To minimize downtime, distribute the GPFS 4.1 packages on all nodes before starting.

1. Check GPFS cluster health
Before performing any updates on any node, verify that the cluster is in a sane state. First check that all nodes are running and active with the command

# mmgetstate -a

Then verify that all disks are active:

# mmlsdisk sapmntdata -e

The disks on the node to be taken down do not need to be in the "up" state, but make sure that all other disks are up, so check the output of this command carefully.

Warning
If disks of more than one server node are down, the file system will be shut down, causing all other SAP HANA nodes to fail.

2. Shutdown SAP HANA
Shut down SAP HANA and the sapstartsrv daemon via

# service sapinit stop

Verify that SAP HANA, sapstartsrv and any other process accessing /sapmnt are not running anymore:

# lsof /sapmnt

No processes should be found. If any processes are found, please retry stopping SAP HANA and all other processes accessing /sapmnt.

3. Unmount the file system on the node to be upgraded

# mmumount sapmntdata

and take care that no open process is preventing the file system from unmounting. Other nodes within the cluster still have /sapmnt mounted. If the unmount fails, use

# lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <SID>adm, etc.), close them and retry.

4. Shutdown GPFS processes on the node to be upgraded

# mmshutdown

GPFS unloads its kernel modules during its shutdown, so check the output of this command carefully.

5. Upgrade GPFS to 4.1
Change to the directory where you extracted the GPFS 4.1 packages. Get a list of all installed GPFS 3.5 packages:

# rpm -qa | grep gpfs

Remove all GPFS 3.5 packages returned from the above command:

# rpm -e gpfs.base gpfs.docs gpfs.gpl gpfs.msg.en_US

Optionally, also remove a gpfs.gplbin package if you have that installed. Install the GPFS 4.1 packages:

# rpm -ivh gpfs.base-4.1.0-0.x86_64.rpm
# rpm -ivh gpfs.docs-4.1.0-0.noarch.rpm
# rpm -ivh gpfs.ext-4.1.0-0.x86_64.rpm
# rpm -ivh gpfs.gpl-4.1.0-0.noarch.rpm
# rpm -ivh gpfs.gskit-8.0.50-16.x86_64.rpm
# rpm -ivh gpfs.msg.en_US-4.1.0-0.noarch.rpm

Update to GPFS 4.1 PTF 8. This is just an example; please update to the PTF recommended at this point in time:

# rpm -Uvh gpfs.base-4.1.0-8.x86_64.update.rpm
# rpm -Uvh gpfs.docs-4.1.0-8.noarch.rpm
# rpm -Uvh gpfs.ext-4.1.0-8.x86_64.update.rpm
# rpm -Uvh gpfs.gpl-4.1.0-8.noarch.rpm
# rpm -Uvh gpfs.msg.en_US-4.1.0-8.noarch.rpm

Afterwards, the GPFS compatibility layer must be recompiled:

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages
# make rpm        (optional)

6. Restart GPFS

# mmstartup

Verify that the node started up correctly:

# mmgetstate

During the startup phase the node is shown in state "arbitrating" for a short period of time. This changes to "active" when GPFS has completed startup successfully.

7. Mount file system

# mmmount sapmntdata

8. Start SAP HANA

# service sapinit start

9. Verify GPFS disks are active again (this command can be executed on any node)

# mmlsdisk sapmntdata -e

If any disks are shown as down, restart them with the command

# mmchdisk sapmntdata start -a

If disks are suspended you can resume them all with the command

# mmchdisk sapmntdata resume -a

Afterwards check the disk status again.

10. Restore correct replication level (this command can be executed on any node)
Start a restripe so that all data is properly replicated again:

# mmrestripefs sapmntdata -r

Warning
Do not use the -b parameter!

11. Restore accurate usage count
If a file system was ill-replicated, the used block count results from mmcheckquota may not be accurate. Therefore it is recommended that you run mmcheckquota to restore the accurate usage count after the file system is no longer ill-replicated:

# mmcheckquota -a

12. Continue on the next node with step 2 of this procedure

After all nodes have been updated successfully you can update the GPFS cluster configuration and the GPFS "on disk format" (the data structures written to disk) to the newer version. This update is non-disruptive and can be performed while the cluster is active.

1. Update the cluster configuration to the newest version:

# mmchconfig release=LATEST

2. Activate the new method of cluster configuration repository (CCR):

# mmchcluster --ccr-enable

3. Update the file system's on disk format to activate new functionality:

# mmchfs sapmntdata -V full

Notice that a successful upgrade of the GPFS on disk format to a newer version will make a downgrade to previous GPFS versions impossible. You can verify the minimum required GPFS version for a file system with the command:

# mmlsfs sapmntdata -V
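After the package swap on a node, a quick query against the RPM database confirms that the expected 4.1 set, including the Standard Edition package gpfs.ext, is in place; the package names are the ones from the install commands above:

# List the installed GPFS packages and versions on this node:
rpm -qa | grep -E '^gpfs\.' | sort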
13.7 Update Mellanox Network Cards

You should have received a binary update package, e.g. mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin. Please note that the version number given here might differ. This package needs to be copied to all nodes you wish to update. It will upgrade the driver and the firmware of the Mellanox network cards. It might be necessary to make the file executable:

chmod +x mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin

Then you can start the installation with:

./mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin --enable-affinity

If this step fails, you may have to install the python-devel package from the official SLES or RHEL repositories. Please review the output of the above program for possible errors. After a successful upgrade, a reboot will be necessary.
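After the reboot you can verify that the new driver and firmware are active; eth0 is an example interface name, so use the Mellanox interface of your installation:

# Driver and firmware version check after the Mellanox update:
ethtool -i eth0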
Upgrade complete: Start IBM GPFS & HANA.5: Updating GPFS on page 167 for information on the IBM GPFS upgrade. See section 13. Stop IBM GPFS & HANA.6 For the upgrade a maintenance downtime is needed with a least one reboot of the servers. Upgrade IBM GPFS if necessary. 14.5 to RHEL 6. Upgrade from RHEL 6. then step 2 on all nodes and then step 3 on all nodes and so on. Recompile kernel module for IBM GPFS 8.2 Rolling Upgrade In a cluster environment a rolling upgrade (one node at a time) is possible as long as you are running a HA environment with IBM GPFS 3.1 Upgrade RHEL 6. 5 Shutting down services 1.g. see 13. Shutdown HANA Shutdown HANA and all other SAP software running in the whole cluster or on the single node cleanly.96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. locally.4-1. SFTP.14.0-8. If your system is running a IBM GPFS version below that.Technical Documentation 14.6.6 version 2. You can find out your IBM GPFS version with the command 1 # rpm -q gpfs.el6. 14. If your system is running a IBM GPFS version below that.0-8. Unmount the IBM GPFS file system Unmount the IBM GPFS file system /sapmnt by issuing 1 # mmumount all 3.6 compability pack Other ways of providing the images to the Server (e.6 Upgrade of IBM GPFS You should run at least IBM GPFS version 4. X6 Implementation Guide 1. you should upgrade IBM GPFS first.1. Shutdown IBM GPFS 1 # mmshutdown -a to shutdown the IBM GPFS software on all cluster nodes.g.i686 e.0 of the Mellanox-Drivers is needed.1.5: Updating GPFS on page 167. you should upgrade IBM GPFS. Login in as root on each node and execute 1 # service sapinit stop Make sure no process has files open on /sapmnt.3-19.6-DVD • nss-softokn packages – nss-softokn-freebl-3. 14. you can test that with the command: 1 # lsof /sapmnt 2.14.3-19. etc) are possible but not explained as part of this guide. You can check the version using: 1 # ethtool -i eth0 For the Upgrade the following DVDs or images are needed: • RHEL 6.4 Prerequisites You are running Lenovo Systems Solution for SAP HANA appliance system and want to to upgrade the RHEL 6. Also other upgrade mechanism like e.0.x86_64 – nss-softokn-freebl-3.el6.5 operating system to RHEL 6.base For RHEL 6. FTP. You should run at least IBM GPFS version 4. using a satellite-server are out of scope of this guide.g as part of the RHEL 6.9. 2015 179 . 32-431.sap. 2015 180 .96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. repo in /etc/yum.6.5 (end) changed to 1 2 # Keep packages for RHEL 6.d 1 # vi /etc/yum.Technical Documentation 14.32-431.4-1.32-431.5 (begin) # Keep packages for RHEL 6.6.6 at least version 2.* kernel-headers-2.0. 14.6.list 2 3 4 5 6 7 8 9 10 11 # Keep packages for RHEL 6.6\ Server.6 DVD Check where the RHEL 6.5 to RHEL 6.* redhat-release-* # Keep packages for RHEL 6. you should upgrade the Mellanox drivers first.* kernel-devel-2.d/versionlock. Now create a repository file rhel-dvd66.5 (end) 2. Upgrade to RHEL 6.32-431. list if this file exists: 1 # vi /etc/yum/pluginconf.wdf.4. Create a repository from your RHEL 6.6 1 # yum update --enablerepo=dvd66 Check.el6.6.7 Update Mellanox Drivers For RHEL 6.2-1.x86_64 This information is needed for the baseurl-part below.corp/sap/support/notes/2013638 libssh2-1. 
14.5 Shutting down services

1. Shutdown HANA
Shut down HANA and all other SAP software running in the whole cluster or on the single node cleanly. Log in as root on each node and execute

# service sapinit stop

Make sure no process has files open on /sapmnt; you can test that with the command:

# lsof /sapmnt

2. Unmount the IBM GPFS file system
Unmount the IBM GPFS file system /sapmnt by issuing

# mmumount all

3. Shutdown IBM GPFS

# mmshutdown -a

to shut down the IBM GPFS software on all cluster nodes.

14.6 Upgrade of IBM GPFS

You should run at least IBM GPFS version 4.1.0-8. If your system is running an IBM GPFS version below that, you should upgrade IBM GPFS, see 13.5: Updating GPFS on page 167.

14.7 Update Mellanox Drivers

For RHEL 6.6 at least version 2.4-1.0.0 of the Mellanox drivers is needed. If you have a version below that, you should upgrade the Mellanox drivers first, see 13.7: Update Mellanox Network Cards on page 176.

14.8 Upgrading Red Hat

1. To allow these updates, you have to delete all lines containing restrictions in the file versionlock.list, if this file exists:

# vi /etc/yum/pluginconf.d/versionlock.list

For example,

# Keep packages for RHEL 6.5 (begin)
# https://css.wdf.sap.corp/sap/support/notes/2013638
libssh2-1.4.2-1.el6.x86_64
kernel-2.6.32-431.*
kernel-firmware-2.6.32-431.*
kernel-headers-2.6.32-431.*
kernel-devel-2.6.32-431.*
redhat-release-*
# Keep packages for RHEL 6.5 (end)

is changed to

# Keep packages for RHEL 6.5 (begin)
# Keep packages for RHEL 6.5 (end)

2. Create a repository from your RHEL 6.6 DVD. Check where the RHEL 6.6 DVD is mounted:

# ls /media/
RHEL-6.6 Server.x86_64

This information is needed for the baseurl part below. Now create a repository file rhel-dvd66.repo in /etc/yum.repos.d:

# vi /etc/yum.repos.d/rhel-dvd66.repo

with the following content:

[dvd66]
name=Red Hat Enterprise Linux Installation DVD
baseurl=file:///media/RHEL-6.6\ Server.x86_64/
gpgcheck=0
enabled=0

3. Upgrade to RHEL 6.6:

# yum update --enablerepo=dvd66

Check if the upgrade was successful:

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)

4. Prevent a further upgrade from RHEL 6.6 to higher versions. If the file /etc/yum/pluginconf.d/versionlock.list existed in step one, you only have to make the following changes. If not, please install also the package yum-versionlock:

# yum -y install yum-versionlock yum-security --enablerepo=dvd66
# vi /etc/yum/pluginconf.d/versionlock.list

# Keep packages for RHEL 6.6 (begin)
# https://css.wdf.sap.corp/sap/support/notes/2136965
libssh2-1.4.2-1.el6.x86_64
kernel-2.6.32-504*
kernel-firmware-2.6.32-504.*
kernel-headers-2.6.32-504.*
kernel-devel-2.6.32-504.*
redhat-release-*
# Keep packages for RHEL 6.6 (end)

14.9 Mandatory Kernel Update

Please consult SAP if there is now a higher version of the kernel recommended. Please check also chapter 13.4.2: RHEL Kernel Update Methods on page 166.

14.10 Update of nss-softokn packages

An update of the nss-softokn packages is mandatory. More information can be found in SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11, and in "Why can I not install or start SAP HANA after a system upgrade?" (https://access.redhat.com/solutions/1236813).

yum -y install [path to packages]/nss-softokn-freebl-3.14.3-19*.rpm
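A quick way to confirm that both architectures of the mandatory package are in place afterwards; the package names are the ones listed in the prerequisites:

# Both the x86_64 and the i686 freebl packages should be reported:
rpm -qa | grep nss-softokn-freebl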
14.11 Recompile Linux Kernel Modules

IBM GPFS needs self-compiled (so-called "out-of-tree") Linux kernel modules to operate properly. To compile the IBM GPFS kernel module execute the following commands:

# cd /usr/lpp/mmfs/src
# make Autoconfig
# make World
# make InstallImages

14.12 Adapting Configuration

Please review the performance settings in D: Performance Settings on page 211 because they might have changed.

14.13 Start IBM GPFS and HANA

Start IBM GPFS and HANA by either rebooting the machine (recommended) or starting the daemons manually:

1. Restart GPFS

# mmstartup
# mmmount all

Verify the status of IBM GPFS and that the file system is mounted:

# mmgetstate
# mmlsmount all

2. Start HANA

# service sapinit start

15 System Check and Support

This chapter describes different steps to check the appliance's health status. The script described here should be updated and executed in regular intervals by a system administrator. The other sections present additional information and give deeper insight into the system.

Note
SAP Note 1661146 – Lenovo/IBM Check Tool for SAP HANA appliances provides details for downloading and using the following scripts to catalog the hardware and software configurations and create a set of information to assist service and support of the machine by SAP and Lenovo. We highly recommend that a SAP HANA system administrator regularly downloads and updates these scripts to ensure obtaining the latest support information for the servers.

15.1 System Login

The latest version of the Lenovo Solution installation also adds a message of the day that shows the current status of the GPFS file systems and the memory usage. This will pop up once at each login for every user. The message is created by a cron job that runs once an hour; this means that the information is not real time and the system status may have changed in the meantime.

 ____    _    ____    _   _    _    _   _    _
/ ___|  / \  |  _ \  | | | |  / \  | \ | |  / \
\___ \ / _ \ | |_) | | |_| | / _ \ |  \| | / _ \
 ___) / ___ \|  __/  |  _  |/ ___ \| |\  |/ ___ \
|____/_/   \_\_|     |_| |_/_/   \_\_| \_/_/   \_\

Lenovo Systems Solution for SAP HANA appliance

See SAP Note 1650046 for maintenance and administration information:
https://service.sap.com/sap/support/notes/1650046

_Regularly_ check the system health!
________________________________________________________________________________

! INFO: Last hourly update on Mon Jun 22 14:45:02 CEST 2015.
! NOTICE: All GPFS NSDs up and ready.
! NOTICE: All quota usages below 90%.
! NOTICE: Memory usage is 3%.

Listing 1: SSH login screen
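Beyond the hourly motd refresh, the check script itself can be scheduled. A hypothetical root crontab entry; the schedule and the log file path are assumptions, the script path is the one used in this chapter:

# Run the basic system check every Monday at 06:00 and keep the output:
0 6 * * 1 /opt/lenovo/saphana/bin/saphana-support-lenovo.sh -c > /var/log/saphana-check.log 2>&1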
15.2 Basic System Check

Included with the installation is a script that will inform you and the customer whether all the hardware requirements and basic operating system requirements have been met. You can find it in SAP Note 1661146 – Lenovo/IBM Check Tool for SAP HANA appliances.

Note
It is highly recommended to work with the latest version of the system check script.

Using the option -h, you can see the various ways to call the saphana-support-lenovo.sh script:

# saphana-support-lenovo.sh -h
Usage: saphana-support-lenovo [OPTIONS]

Lenovo Systems solution for SAP HANA appliance System Checking Tool
to check hardware system configuration for Lenovo and SAP Support teams.

Options:
-c      Check system (no log file, default).
-s      Print out the support information for SAP support.
        (-s replaces the --support option.)
-h      Print this information

Check extensions (only valid in conjunction with -c)
-v      Verbose. Do not hide messages during check. Recommended after installation.
-e      Do exhaustive testing with longer running tests. May impact HANA
        performance during check. Implies -v.

If using the Advanced Settings Utility (ASU) from a Virtual Machine
-i host The host name of the Integrated Management Module (IMM)

Report bugs to <sapsolutions@lenovo.com>.

Listing 2: Support script usage

An output similar to the following should be reported when you use the option -c (check, which is the default option). If for any reason you receive warnings or errors that you do not understand, please first try this again with the option -v (verbose) and then open with the customer an SAP OSS customer message with the output from the -s (support) option in Section 15.3: System Support on page 186 attached.

# saphana-support-lenovo.sh -c
===================================================================
# LENOVO SUPPORT TOOL Version 1.9.96-13.2406.2b5da57 -- 2015-06-15
# (C) Copyright IBM Corporation 2011-2014
# (C) Copyright Lenovo 2015
# Analysis taken on: 20150622-1522
===================================================================

-------------------------------------------------------------------
Lenovo Systems solution for SAP HANA appliance Hardware Analysis
-------------------------------------------------------------------

Machine analysis for IBM x3850 X6 -[6241FT1]- Model "AC34S1024" OK
-------------------------------------------------------------------

Appliance Solution analysis:
----------
Information from /etc/lenovo/appliance-version:
Lenovo System x3850 X6: Workload Optimized System for SAP HANA
Model AC34S1024
Installed appliance version: 1.9.96-13.2406.2b5da57
Installed on: Mon Jun 22 15:17:04 CEST 2015

Operating System
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3

Installation configuration:
----------
Parameter clustered is standby
Parameter exthostname is ...
Parameter cluster_ha_nodes is 1
Parameter cluster_nr_nodes is 2
Parameter hanainstnr is 12
Parameter hanasid is FLO
Parameter hanauid is 1100
Parameter hanagid is 111
Parameter shared_fs_mountpoint is /sapmnt
Parameter gpfs_node1 is gpfsnode01 192.168.213.101
Parameter gpfs_node2 is gpfsnode02 192.168.213.102
Parameter hana_node1 is hananode01 192.168.212.201
Parameter hana_node2 is hananode02 192.168.212.202
Parameter step is 11
-------------------------------------------------------------------

Hardware analysis:
----------
CPU Type: Pentium 4 Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz OK
# of CPUs: 4, threads: 144 OK

Memory: 1024 GB / Free Memory: 978 GB OK

ServeRAID: 2 adapters OK

IBM General Parallel File System (GPFS):
----------
GPFS with replication [4.1.0-7]
Cluster HANAcluster.gpfsnode01 is active
GPFS device /dev/sapmntdata mounted on /sapmnt of size 24566GB

SAP Host Agent Information
==========================
/usr/sap/hostctrl/exe/saphostctrl: 720, patch 715, changelist 1540948

SAP Host Agent known SAP instances
----------------------------------
Inst Info : FLO - 12 - linuxx86_64 - 742, patch 29, changelist 1546327, opt (Dec 20 2014, 01:27:21)

SAP HANA Instances
==================

SAP HANA Instance FLO/12
------------------------
SAP HANA 1.00 Build 1432206182-1530 Revision 096 is installed OK

General Health checks:
----------
NOTE: The following checks are for known problems of the system.

[ERROR] The global_allocation_limit for FLO is not set. (See FAQ #1.)
[ERROR] Upgrade to kernel version 3.0.101-0.35.1 or higher. (SAP Note #1954788 / 2015-02-11)
[ERROR] GLIBC must be updated to version 2.11.3-17.56.2 or higher (SAP Note #1557506), found 2.11.3-17.54.1

Only issues will be shown, no check failed. To show succeeded checks, add the parameter -v.
See the FAQ section of the Lenovo - SAP HANA Operations Guide, SAP Note 1661146,
found at https://service.sap.com/notes

-------------------------------------------------------------------
E N D   O F   L E N O V O   D A T A   A N A L Y S I S
-------------------------------------------------------------------
Removing support script dump files older than 7 days.

Listing 3: Support script output

15.3 System Support

In case of a problem with the Lenovo Systems Solution for SAP HANA Platform Edition,
whether or not it is an obvious problem with the hardware, you should always direct the customer to open an OSS Message. Lenovo, IBM, and SAP have an agreement that all problems with the Lenovo Solution are to come first through the SAP Support process, where there are Lenovo L3 Support members who will help the customer determine what the root cause of the problem is. If it is determined that there is a problem with the hardware, then the Lenovo L3 support person will instruct and guide the customer in opening the correct IBM PMR and help ensure that the appropriate attention has been given to the problem.

In order to make this process easier for all involved, Lenovo delivers a special program that can gather much of the data necessary in an initial support call. This script is found in the directory /opt/lenovo/saphana/bin and is called saphana-support-lenovo.sh. Using this script the customer can help streamline the support process in order to obtain the fastest and most competent support available. This script, along with the Linux SAP System Information Tool, can be found in the SAP OSS Notes 1661146 and 618104 respectively. When the SAP System Information Tool is placed in /opt/lenovo/saphana/bin, it will be automatically called from this script and its output will also be collected.

In order to collect support data, the customer should run this command from the shell as follows:

# saphana-support-lenovo.sh -s

Note
[1.8.80-12]: These appliances were shipped with the script /opt/ibm/saphana/bin/saphana-support-ibm.sh. When installing the latest support script version you will get the new script saphana-support-lenovo.sh. Do not remove the script saphana-support-ibm.sh.

15.4 Additional Tools for System Checks

15.4.1 Lenovo Advanced Settings Utility

Note
X6 based servers and later technology come preinstalled with this utility.

In some cases it might be useful to check the UEFI settings of the HANA servers. The saphana-support-lenovo.sh script uses the Lenovo Advanced Settings Utility (ASU), if it is installed. Go to https://www-947.ibm.com/support/entry/myportal/docdisplay?lndocid=LNVO-ASU and install the RPM. Before upgrading the ASU tool remove the old version; find the installed version via rpm -qa | grep asu.

15.4.2 ServeRAID StorCLI Utility for Storage Management

Note
X6 based servers come preinstalled with this utility. This includes all x3850 X6 and x3950 X6 servers.

Warning
[1.6.60-7]+: With the change to the RAID5 based storage configuration, installing the MegaCLI Utility is even more important, as a HDD/SSD failure is not directly visible with standard GPFS commands until a whole RAID array has failed.

The saphana-support-lenovo.sh script also analyzes the status of the ServeRAID controllers and the controller-internal batteries to check whether the controllers are in a working and performing state, and prints out warnings if there is a misconfiguration. This check can be enabled via the -e parameter. For activation of this feature the StorCLI (Command Line) Utility for Storage Management software must be installed. Download the latest Linux 64-bit RPM from https://www-947.ibm.com/support/entry/myportal/docdisplay?lndocid=migr-5092950, download the file locally and install the RPMs. Before upgrading the StorCLI tool remove the old version; find the installed version via rpm -qa | grep storcli.
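Once installed, a quick controller overview can also be pulled manually. The binary path below is the usual StorCLI install location and is an assumption; adapt it if your installation differs:

# Show the status of all ServeRAID/MegaRAID controllers ("call" = all):
/opt/MegaRAID/storcli/storcli64 /call show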
15.4.3 SSD Wear Gauge CLI utility

Note
X6 based servers come preinstalled with this utility.

For models of the Lenovo Solution that come with SSDs (XS, S, and eX5 SSD models) it might be useful to check the state of the SSDs. Go to http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5090923 and download the latest binary of the SSD Wear Gauge CLI utility (lnvgy_utl_ssd_-<version>_linux_32-64.bin). Copy it to the machine to be checked. When upgrading the tool remove existing binaries from /opt/ibm/ssd_cli/ and/or /opt/lenovo/ssd_cli/. Copy the bin file into /opt/lenovo/ssd_cli/:

# mkdir -p /opt/lenovo/ssd_cli/
# cp lnvgy_utl_ssd_-*_linux_32-64.bin /opt/lenovo/ssd_cli/
# chmod u+x /opt/lenovo/ssd_cli/lnvgy_utl_ssd_-*_linux_32-64.bin

Execute the binary:

# /opt/lenovo/ssd_cli/lnvgy_utl_ssd_-*_linux_32-64.bin -u

Sample output:

1 PN:..., SN:..., FW:...
  Percentage of cell erase cycles remaining: 100%
  Percentage of remaining spare cells: 100%
  Life Remaining Gauge: 100%

15.5 Getting Support (IBM PMR, SAP OSS)

In case of a failure follow these instructions:

1. Check for hardware failure: The server's IMM will report hardware incidents. You may also check the IMM's Virtual Light Path or the LEDs on the physical server. See the Quick Start Guide (http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5087035), section "Getting help and technical assistance", for more information.
   • If only a hardware replacement is necessary, take the according steps with IBM.

2. Control the software status: Execute saphana-support-ibm.sh -cv with the latest version of the support script (see section 15: System Check and Support on page 183). The script will check for common root causes of failures.
   • Try to apply the solutions suggested by the support script and the Operations Guide.
   • Consult the Lenovo SAP HANA Appliance Operations Guide (SAP Note 1650046, SAP Service Marketplace ID required).

3. If you could not determine the root cause of the failure, or there is no solution provided by the support script or the Operations Guide, open an SAP OSS ticket.

16 Backup and Restore of the Primary Partition

This section provides the instructions necessary to create a simple system copy of the base operating system found on the first hard drive. This image can then be used for a basic backup/restore solution of the primary partition. What follows is a description of how to create a backup of the operating system, excluding the SAP HANA executables, configuration, data or logs. We also describe how to restore these items in case of a planned or unplanned disaster with the original Operating System (OS). The intent of this section is that the user can have a simple backup and restore solution using the tools available within Linux to protect their system. This is valid for systems installed with at least version 1.8.80-10 of the System x automated installer; earlier systems may require extra effort for OS backup partition creation.

The following System x server models can be used:

• System x3850/x3950 X6 Workload Optimized System (6241) for SAP HANA Platform Edition
• System x3850/x3950 X6 Workload Optimized System (3837) for SAP HANA Platform Edition
• System x3950 eX5 Workload Optimized System (7143) for SAP HANA Platform Edition
• System x3690 eX5 Workload Optimized System (7147) for SAP HANA Platform Edition

For enterprise backup and restore solutions, we recommend to use an enterprise backup and restore option to ensure backup/restore operations for the operating system as well as the IBM General Parallel File System and SAP HANA file systems.

Warning
Do not go into production without verifying a full backup and a full restore of the operating system!
16.1 Description

In order to perform a simple backup and restoration of the OS, you need to run a few commands in Linux in order to set up a working copy of the OS. Using the Linux command rsync, you are able to intelligently copy a file system from one partition to another quickly and with little effort. What we explain here is a method of copying the Linux file system that is contained on two partitions of the first hard drive. As seen in Figure 86: Overview of Backup/Restore Operations on page 190, the general concept is that the user uses rsync to copy the contents of the root (/) and boot (/boot/efi) directories from their original partitions onto two newly created partitions on the same hard drive. This image, once copied initially, should also be transferred to offline storage to ensure that data does not get lost due to irreparable hard drive failures. This tool can also be set up in nightly cron schedules to happen automatically and semi-automate the process of taking a backup image of the OS.

[Figure 86: Overview of Backup/Restore Operations. Normal operation: rsync copies the hanaroot and hanaboot partitions to backroot and backboot. Restore: boot the backup partition, rsync the saved contents back, then return to normal operation.]

This is not highly available, due to the possibility of a hard drive failure of the device used for both the primary and backup partitions, yet it does provide the reliability of a stable and usable backup method. In order to obtain high availability of the backed-up image, we strongly recommend to copy the images saved to the local partitions onto another external storage system. With the rsync command it is possible to take these snapshots over the network, which can improve the availability of the saved image. This document will not cover that; you may refer to the rsync man page for more details.

16.1.1 Boot Loader

The server can use two different methods to boot. For X6 based systems, the default method is using the Unified Extensible Firmware Interface, or UEFI. According to Wikipedia (http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface), the Unified Extensible Firmware Interface is a specification that defines a software interface between an operating system and platform firmware. The second method is using the legacy method of BIOS, which was a typical way to boot SAP HANA on eX5 based systems. Linux requires a boot loader that understands the specific boot method. Two options are available: Grand Unified Bootloader (GRUB) and Linux Loader (LILO).
The way a server boots, and subsequently installs the boot loader, determines some of the system partitioning and file system layout of the installed server. Although it is possible to use both methods to boot and install the Lenovo Solution server, this document will only cover the steps necessary to create a restore image using the UEFI boot mechanism with either the GRUB or LILO boot loader. If you are using the Legacy Boot option, you will need to become familiar with how each distribution handles the boot procedure with the Legacy BIOS boot option, as this is not part of this documentation.

EFI Linux Loader (ELILO) is the interface the Lenovo System x UEFI uses to talk to the LILO boot loader. The Linux installation will place the boot loader under the directory /boot/efi; the configuration file for ELILO can be found in /etc/elilo.conf. Using GRUB, the Linux installation will also place the boot loader under the directory /boot/efi; the configuration file for GRUB can be found in /boot/grub/menu.lst or /boot/grub/grub.conf, depending on the version of GRUB.

16.1.2 Drive Partitions

Starting with version 1.8.80-10 of the Lenovo Solution installation media, the installer creates five (5) partitions on the first drive (sda). Each partition has a specific label and purpose for the system backup and restore. The labels are: hanaboot, hanaroot, hanaswap, backboot and backroot. The correlation of these labels to the appropriate devices can be found by listing the symbolic links in the directory /dev/disk/by-label. An example partition layout is shown below. The first device is partitioned into several physical and logical partitions and named with a label, a simple identifier, and a Universally Unique Identifier (UUID). Only the UUID is promised to remain connected to the proper partition as it was created.

Partition    /dev/disk/by-label    /dev/disk/by-id                        /dev/disk/by-uuid
/dev/sda1    hanaboot              scsi-{33-hexadecimal-number}-part1     hexadecimal number
/dev/sda2    hanaroot              scsi-{33-hexadecimal-number}-part2     hexadecimal number
/dev/sda3    swap                  scsi-{33-hexadecimal-number}-part3     hexadecimal number
/dev/sda4    backboot              scsi-{33-hexadecimal-number}-part4     hexadecimal number
/dev/sda5    backroot              scsi-{33-hexadecimal-number}-part5     hexadecimal number

Attention
Pay special attention to systems installed earlier than version 1.8.80-10. These systems may have been installed with extra partitions that are used for other auxiliary file systems unrelated to SAP HANA. If this is the case, you should be certain to first create enough free space in order to create the new backup partitions, and also determine a way to back up and save off the data in these auxiliary partitions. The backup and recovery of these drives is not part of this document, but similar rules can be applied.

16.2 Prerequisites

The Lenovo Solution server should have been installed using the included automatic installer program. If not, some of the names of the partitions might be different and these directions may not work correctly.
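To see how the labels, ids and UUIDs from the table above map to the physical partitions on a given system, the standard udev symlink directories can be listed:

# Map labels, ids and UUIDs to the physical partitions:
ls -l /dev/disk/by-label/ /dev/disk/by-id/ /dev/disk/by-uuid/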
16.2.0.1 SUSE Linux Enterprise Server Partition Labels

In a system installed with the SUSE Linux Enterprise Server OS, not all partitions are labeled. This seems to be an issue with how SLES handles the creation of labels for VFAT file system partitions. By default, SLES uses the values found under the /dev/disk/by-id directory when describing specific partitions. This document will continue to use the /dev/disk/by-label values, and it is expected that these are translated to /dev/disk/by-id values when implementing this backup solution on SLES.

16.2.0.2 Create entries in /etc/fstab for new mounts

Before you start with the OS portion of this procedure, you should ensure that the backboot and backroot devices are mounted to the file system as /var/backup/boot/efi and /var/backup/root. These mount points should already exist in the file /etc/fstab, similar to the example (for SLES) below:

## Sample SLES entries for HANA System Backup
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part4  /var/backup/boot/efi  vfat  umask=0002,utf8=true  0 0
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5  /var/backup/root      ext3  acl,user_xattr        1 1

Listing 4: Example SUSE fstab entries

Note
The hexadecimal portion of the value /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-partX will be different for every individual drive and installation. We recommend to read the contents of /etc/fstab first and copy only the values for the stated partitions for all new backup partitions. Pay particular notice to rename the partition to the correct partition created!

## Sample RHEL entries for HANA System Backup
UUID=c605201a-04bc-47a8-bbc4-b6808ee98fe1  /var/backup/root      ext4  defaults                    1 2
UUID=FF50-7B37                             /var/backup/boot/efi  vfat  umask=0077,shortname=winnt  0 0

Listing 5: Example Red Hat fstab entries

Note
The hexadecimal portion of the value /dev/disk/by-uuid/ will be different for every individual drive and installation. We recommend to read the contents of /etc/fstab first and copy only the values for the stated partitions for all new backup partitions. Pay particular notice to rename the partition to the correct partition created!

16.2.1 Correcting the backup fstab

After each time the rsync command has completed, the root file system has been copied exactly from / into /var/backup. In order to boot from the backup partition backroot, we want to switch the partition labels (or ids) from the hana* to the back* labelled partitions. The hana* partitions should then be mounted as the file system /var/backup in order to restore from the backed-up image in the case of a recovery.

We recommend to slightly modify the message of the day (motd) so that you can visually see that you are using the backup image. Since this is also copied on top of any previous images, it is best to use a symbolic link to keep both the backup and original motd file:

touch /etc/motd.bak /etc/motd.orig
echo "## !!!!! T H E   B A C K U P   M E S S A G E !!!!! ##" > /etc/motd.bak
cat /etc/motd >> /etc/motd.bak
cat /etc/motd >> /etc/motd.orig
rm /etc/motd
ln -s /etc/motd.orig /etc/motd

Listing 6: Creating a copy of the motd file

After every rsync run, the fstab needs to be adapted as shown here. We recommend to create a copy of the original and of the backup version, so that you can easily switch between the two after a call to rsync. You can copy the original file /etc/fstab to /etc/fstab.orig and create a new copy called /etc/fstab.bak:

touch /etc/fstab.bak /etc/fstab.orig
echo "## !!!!! T H E   B A C K U P   F S T A B   F I L E !!!!! ##" > /etc/fstab.bak
cat /etc/fstab >> /etc/fstab.bak
cat /etc/fstab >> /etc/fstab.orig
rm /etc/fstab
ln -s /etc/fstab.orig /etc/fstab

Listing 7: Example SLES primary fstab file

Thereafter, you can change the SUSE Linux Enterprise Server entries in /etc/fstab.bak to:

## Adding entries for HANA System Backup
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part2  /var/backup           ext3  acl,user_xattr        1 1
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part1  /var/backup/boot/efi  vfat  umask=0002,utf8=true  0 0
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part3  swap                  swap  defaults              0 0
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part4  /boot/efi             vfat  umask=0002,utf8=true  0 0
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5  /                     ext3  acl,user_xattr        1 1

Listing 8: Example SLES backup fstab file

Notice that the /dev/disk/by-id entries will be different on your system; the mount points need to be changed as shown above. On Red Hat Enterprise Linux the entries in /etc/fstab.bak should be changed to:

## Adding entries for HANA System Backup
LABEL=backroot  /                     ext3  acl,user_xattr        1 1
LABEL=backboot  /boot/efi             vfat  umask=0002,utf8=true  0 0
LABEL=hanaswap  swap                  swap  defaults              0 0
LABEL=hanaroot  /var/backup           ext3  acl,user_xattr        1 1
LABEL=hanaboot  /var/backup/boot/efi  vfat  umask=0002,utf8=true  0 0

Listing 9: Example RHEL backup fstab file

Notice that only the labels have changed. After these files have been created, you will also need to recreate a symbolic link under the /var/backup directory to point the files to their backup representations as follows:

rm /var/backup/etc/fstab
cd /var/backup/etc
ln -s fstab.bak fstab

Listing 10: Changing files for backup partition
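Switching between the normal and the backup view is then only a matter of repointing the symlinks. A small hedged helper, using the file names created above:

# Point the backup partition's fstab at the backup view (as done by
# Listing 10):
ln -sf fstab.bak /var/backup/etc/fstab
# To revert the backup partition to the normal view again:
ln -sf fstab.orig /var/backup/etc/fstab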
16.2.2 Add boot loader entry for backup partition

ELILO installed systems

After the fstab file has been modified, create a backup entry in the ELILO boot menu (/etc/elilo.conf) by copying the whole subsection identified by the label=linux statement. On RHEL replace the label and root values with the value backup and the backroot partition ID. On SLES the according scsi-<id>-part<X> has to be changed to fit the <id> and partition <X> on the given system. It is important to modify the string ###Don't change this comment - YaST2 identifier: Original name: name### on these installs; otherwise, YaST will not see this option in the boot list for ELILO and may not present it to you during boot.

## Adding Restore entry to UEFI Boot menu
image = vmlinuz-3.0.76-0.11-default
###Don't change this comment - YaST2 identifier: Original name: backup###
label = backup
append = "resume=/dev/sda3 splash=silent transparent_hugepage=never intel_idle.max_cstate=0 processor.max_cstate=0 showopts"
description = "Backup of SAP HANA Platform Edition Image"
initrd = initrd-3.0.76-0.11-default
root = /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5

Listing 11: Example UEFI Configuration for Primary Partition

If you update the kernel, you will also need to update the lines image = and initrd = in this file for the backup entry. After changing /etc/elilo.conf run

elilo --verbose

to update the boot loader.
16.2.2 Add boot loader entry for backup partition

ELILO installed systems

After the fstab file has been modified, create a backup entry in the ELILO boot menu (/etc/elilo.conf) by copying the whole subsection identified by the label=linux statement. On RHEL, replace the label and root values with the value backup and the backroot partition ID. On SLES, the according scsi-<id>-part<X> has to be changed to fit the <id> and partition <X> on the given system. It is important to modify the string ###Don't change this comment - YaST2 identifier: Original name: name### on these installs; otherwise, YaST will not see this option in the boot list for ELILO and may not present it to you during boot.

   ## Adding Restore entry to UEFI Boot menu
   image = vmlinuz-3.0.76-0.11-default
   ###Don't change this comment - YaST2 identifier: Original name: backup###
   label = backup
   append = "resume=/dev/sda3 splash=silent transparent_hugepage=never intel_idle.max_cstate=0 processor.max_cstate=0 showopts"
   description = "Backup of SAP HANA Platform Edition Image"
   initrd = initrd-3.0.76-0.11-default
   root = /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5

Listing 11: Example UEFI Configuration for Primary Partition

If you update the kernel, you will also need to update the lines image = and initrd = in this file for the backup entry. After changing the elilo.conf run

   elilo --verbose

to update the boot loader. On SLES you can alternatively use the command

   yast2 bootloader

to update the boot loader.

Grub installed systems

In systems installed using the GRUB boot loader (by default all Red Hat based installs and SUSE installs on System eX5 hardware), edit the contents of /boot/grub/grub.cfg (RHEL) or /boot/grub/menu.lst (SLES), and copy the section for the primary partition to edit it as the new backup partition. The title, root and kernel lines are changed to match the partition used for the backroot partition. On SLES the according scsi-<id>-part<X> has to be changed to fit the <id> and partition <X> on the given system.

   title Backup of SAP HANA Platform Edition Image
   root (hd0,<PARTITION NR, see below>)
   kernel /boot/vmlinuz-2.6.32-431.el6.x86_64 ro root=LABEL=backroot KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 crashkernel=auto processor.max_cstate=0 intel_idle.max_cstate=0 transparent_hugepage=never SYSFONT=latarcyrheb-sun16 rd_NO_LUKS rd_NO_LVM rd_NO_DM rd_NO_MD rhgb quiet
   initrd /boot/initramfs-2.6.32-431.el6.x86_64.img

Listing 12: Example GRUB Configuration for Primary Partition

To update the boot loader afterwards, run on SLES:

   yast2 bootloader

on RHEL:

   grub-install /dev/sda

Note
The partition number for a GRUB installed partition is based on the device syntax of (device[,partmap-name1part-num1[,partmap-name2part-num2[...]]]). The syntax (hd0) represents using the entire disk of the first device, for example sda, while the syntax (hd0,1) represents using the second partition of the device, for example sda2. Notice that GRUB identifies the first partition on the first device as (hd0,0).

Note
In our example, we exchange the meanings of the hanaroot and backroot partitions. Here, we presume that the hanaroot partition is (hd0,1) and the backroot partition is (hd0,4). When booting into this kernel, the hanaroot is the partition to be restored, and the backroot is the default partition to be booted. The intention is that you will be able to start up the backup partition in order to copy the saved state in the backup partition over top of the primary partition.

Append or change these lines in /var/backup/etc/grub.conf. This is a copy of the default boot line with the title, root and kernel lines changed to match the partition used for the backup partition. We should also change the parameter default in the header subsection to point to the Restore image (usually subsection number 2) rather than the original SAP HANA image.

   default=2
   title Restore from SAP HANA Platform Edition Backup Image
   root (hd0,<PARTITION NR, see above>)
   kernel /boot/vmlinuz-2.6.32-431.el6.x86_64 ro root=LABEL=backroot KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 crashkernel=auto processor.max_cstate=0 intel_idle.max_cstate=0 transparent_hugepage=never SYSFONT=latarcyrheb-sun16 rd_NO_LUKS rd_NO_LVM rd_NO_DM rd_NO_MD rhgb quiet
   initrd /boot/initramfs-2.6.32-431.el6.x86_64.img

Listing 13: Example GRUB Configuration for Backup Partition
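If you are unsure which partition number to use in the root line, you can list the partition table of the boot disk; GRUB legacy counts partitions starting at 0, so the Nth partition reported by the tool corresponds to (hd0,N-1). A minimal sketch (the device name is an example):

   # Show the partition layout of the first disk;
   # e.g. /dev/sda5 would be addressed as (hd0,4) in GRUB legacy
   fdisk -l /dev/sda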
16.2.3 Backup of the Linux operating system

In order to perform an initial backup, run the following commands as root. The initial backup will take a long time, as it copies the entire file system under the hanaroot partition into the backroot partition. Subsequent executions of the rsync command will be shorter, as rsync is intelligent enough to copy only what has changed between calls of the command.

As the system administrator (root) run:

   start_stamp=$(date +%s)
   # Begin backup of root file system
   rsync -aAXxv --delete / /var/backup --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/var/backup/*,/boot/efi*,/sapmnt/*,/var/lib/ntp/proc/*,/lost+found,/etc/fstab}
   middle_stamp=$(date +%s)
   echo "Root file system completed in $( echo "(${middle_stamp}-${start_stamp})/60" | bc ) minutes $( echo "(${middle_stamp}-${start_stamp})%60" | bc ) seconds"
   # Begin backup of /boot/efi file system
   rsync -aAXxv --delete /boot/efi/ /var/backup/boot/efi/
   end_stamp=$(date +%s)
   echo "Boot file system completed in $( echo "(${end_stamp}-${start_stamp})/60" | bc ) minutes $( echo "(${end_stamp}-${start_stamp})%60" | bc ) seconds"

Listing 14: Example rsync command
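Before the first real run it can be useful to preview what rsync would copy, without modifying anything. A minimal sketch using rsync's dry-run option; the exclude list is abbreviated here for readability, use the full list from Listing 14:

   # Preview the transfer; -n (--dry-run) only reports what would be copied
   rsync -aAXxvn --delete / /var/backup --exclude={/dev/*,/proc/*,/sys/*,/var/backup/*}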
16.2.4 Restoring the operating system

In case of a planned or unplanned system outage, it is possible to recover the last known good backup of the root and boot file system partitions that have been copied onto the backup partitions. In the case of a hard drive failure where the backup partitions have been lost, the copies stored on an external storage must be recopied into the backup partitions after the hard drive failure has been resolved by the hardware support team. Afterwards, the restore can take place as described here.

Restart the machine and boot the backup OS. This should be done only after checking that the boot loader menu in the backup partition has been properly updated according to the directions in 16.2.1: Correcting the backup fstab on page 192 above.

On ELILO installed systems, select the created boot option for the backup partition from the list given by the ELILO boot loader menu. By default there is no menu configured, but if you press the TAB key while you see the text ELILO Booting:, you will be given the options you can choose. The newly created option "backup" should be visible; you can use the arrow keys to select it. If not, rerun the elilo --verbose command in the original OS and restart. The GRUB boot loader menu is shown by default (see Figure 87: Sample GRUB boot loader screen on page 196).

Figure 87: Sample GRUB boot loader screen

Once the backup partition is booted, run the following command to transfer the backup to the original root partition:

   start_stamp=$(date +%s)
   # Begin restore of root file system
   rsync -aAXxv --delete / /var/backup --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/var/backup/*,/boot/efi*,/sapmnt/*,/var/lib/ntp/proc/*,/lost+found,/etc/fstab}
   middle_stamp=$(date +%s)
   echo "Root file system completed in $( echo "(${middle_stamp}-${start_stamp})/60" | bc ) minutes $( echo "(${middle_stamp}-${start_stamp})%60" | bc ) seconds"
   # Begin restore of /boot/efi file system
   rsync -aAXxv --delete /boot/efi/ /var/backup/boot/efi/
   end_stamp=$(date +%s)
   echo "Boot file system completed in $( echo "(${end_stamp}-${start_stamp})/60" | bc ) minutes $( echo "(${end_stamp}-${start_stamp})%60" | bc ) seconds"

Listing 15: Example rsync command

Then you need to revert the changes made in 16.2.3: Backup of the Linux operating system on page 195: swap the mount points of / and /boot/efi with the original root partition in /var/backup/etc/fstab. Ensure that they have the reverse meaning to that described in the previous section. On the primary partition:

   ## Adding entries for HANA System Backup
   /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part1  /boot/efi             vfat  umask=0002,utf8=true  0 0
   /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part2  /                     ext3  acl,user_xattr        1 1
   /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part3  swap                  swap  defaults              0 0
   /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part4  /var/backup/boot/efi  vfat  umask=0002,utf8=true  0 0
   /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5  /var/backup/root      ext3  acl,user_xattr        1 1

Listing 16: Example SLES primary fstab file

   ## Adding entries for HANA System Backup
   LABEL=hanaboot  /boot/efi             vfat  umask=0002,utf8=true  0 0
   LABEL=hanaroot  /                     ext3  acl,user_xattr        1 1
   LABEL=hanaswap  swap                  swap  defaults              0 0
   LABEL=backboot  /var/backup/boot/efi  vfat  umask=0002,utf8=true  0 0
   LABEL=backroot  /var/backup/root      ext3  acl,user_xattr        1 1

Listing 17: Example RHEL primary fstab file

Warning
After using the rsync command, pay close attention to the files /var/backup/etc/fstab and the boot loader configuration /var/backup/boot/grub/grub.cfg or /var/backup/etc/elilo.conf.

After rebooting, you should now be able to boot into the primary partition using the boot loader's default menu item.
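After booting either image, you can double-check which partition is actually mounted as the root file system before running any rsync command; this guards against overwriting the wrong image. A short sketch (output is illustrative):

   # Verify which device is mounted as / and which as /var/backup
   mount | grep -E ' / | /var/backup '
   # The modified motd from Listing 6 should appear when on the backup image
   cat /etc/motd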
17 SAP HANA Backup and Recovery

Warning
The snapshot restore functionality in SAP HANA Revision 80 is broken. The described procedure could be done successfully with SAP HANA Revision 91.

17.1 Description

The procedure to backup SAP HANA and IBM GPFS only applies to SAP HANA 1.0 SPS 07 and later. This section provides the instructions necessary to create a simple SAP HANA Platform Edition backup and restore procedure. The intent is that the user can have a simple backup and restore solution using the tools available with IBM GPFS and SAP HANA. For advanced backup and restore solutions, we recommend using an enterprise backup solution to ensure backup/restore operations for IBM GPFS and SAP HANA.

What follows is a description of how to take snapshots of the IBM GPFS file system and the SAP HANA database. These images can then be used for a basic backup/restore solution. Initially, they are copied locally and must be transferred to an offline storage for any real use. We also describe how to restore SAP HANA in case of a planned or unplanned disaster. This approach enables the administrator to take backups of the SAP HANA data without interrupting the database service (so called online backups of the database): any log entries are merged into the data area so that it has a consistent state that can be recovered from, and the time it takes to actually back up the data afterwards to a secure place does not affect SAP HANA operation.

Note
Features from SAP HANA Studio for snapshot generation are described as well. Identical results can be achieved using the command-line SQL interface found in the SAP HANA guide books. All screenshots were taken with Revision 91; the GUI may change with newer releases.

• Make sure to always check the following locations for the latest information:
  – SAP HANA Administration Guide, Chapter: Backup and Recovery: http://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf
  – SAP HANA Backup and Recovery Overview: http://www.saphana.com/docs/DOC-1220
  – IBM GPFS snapshot documentation: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.IBMGPFS.v4r1.IBMGPFS200.doc/bl1adv_logcopy.htm
• These instructions are also included in the SAP HANA Operations Guide.
• This procedure can restore data:
  – on the very same environment the snapshot was taken from,
  – on an environment that copies the landscape of the original system.
• A change in landscape (m-to-n copy) is not supported.

Warning
Do not go into production without verifying a full backup and restore procedure!

17.2 Backup of SAP HANA

Open SAP HANA Studio. In SAP HANA Studio either right-click on Backup and choose "Manage Storage Snapshots" from the context menu or click on "Storage Snapshots" on the right. The following dialog opens: Click on "Prepare". This allows SAP HANA to generate a snapshot. You are then asked to give this snapshot a name; we recommend including the current date and/or time in the snapshot name. This name will be stored in the SAP HANA backup catalog; it does not appear outside of SAP HANA. After clicking the OK button the snapshot is generated.
While the snapshot is active you cannot have further snapshots or backups taken from this SAP HANA instance.

The next step is to take an IBM GPFS snapshot. The IBM GPFS snapshot is taken from the entire GPFS file system, so it does not matter on which server you issue the IBM GPFS snapshot commands. Login to any server of the SAP HANA installation. You can do this via

   mmcrsnapshot sapmntdata `date +%F--%T`

We recommend using the name given to the SAP HANA snapshot as part of the mmcrsnapshot command:

   mmcrsnapshot sapmntdata <snapshotname>
   Writing dirty data to disk
   Quiescing all file system operations
   Writing dirty data to disk again
   Resuming operations.
   Snapshot <snapshotname> created with id 2.

After this command has finished you have a new folder <snapshotname> in /sapmnt/.snapshots. This subfolder contains all files that you can then use to copy to a safe place. Notice the file snapshot_databackup_0_1 in /sapmnt/data/<SID>/mnt00001/hdb00001 – this file indicates that the content of this directory is a valid SAP HANA snapshot and can be used to recover from.

If the IBM GPFS snapshot has finished successfully, confirm this fact and release the SAP HANA snapshot: in SAP HANA Studio click on "Confirm". If the IBM GPFS snapshot did not finish successfully or was manually aborted, click on the "Abandon" button and act accordingly.

Copy the IBM GPFS snapshot data to a safe place on an external storage device. For instance, this could be an NFS export on a storage server; this depends highly on the customer demands and availabilities regarding hardware and backup requirements. Technically, this can be done with the following tools: simple Linux copy (cp), secure copy (scp) or the rsync command. An integration into IBM Tivoli Storage Manager or other automated file backup tools is also possible. See table 55 for the files and directories which need to be copied to an external storage in order to have a full SAP HANA backup.

   Path                                            Exclude
   /sapmnt/.snapshots/<snapshotname>/data/<SID>    –
   /sapmnt/.snapshots/<snapshotname>/shared        /sapmnt/.snapshots/<snapshotname>/shared/<SID>/HDB<INST_NR>/backup

Table 55: Required SAP HANA directories for restore

After the data is successfully copied you need to delete the IBM GPFS snapshot:

   mmdelsnapshot sapmntdata <snapshotname>

Having more than one active snapshot at a time is supported by IBM GPFS; the maximum number of snapshots in sapmntdata is 256 (this applies to IBM GPFS 3.5 and 4.1). However, keep in mind that all IBM GPFS snapshots still remain on the same physical disks as your production SAP HANA data. This does by no means represent a valid backup location! Moreover, having IBM GPFS snapshots will lead to a slightly decreased file system performance. Therefore it is essential to move and archive such a backup to a remote device and to delete the snapshot. You can list all existing IBM GPFS snapshots with

   mmlssnapshot sapmntdata
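As an illustration of the copy step from Table 55: transferring the snapshot contents to an NFS export with rsync could look like the sketch below. The mount point /mnt/backupnfs is a placeholder; <SID>, <INST_NR> and <snapshotname> follow the document's notation:

   # Copy the HANA data directory of the snapshot to external storage
   rsync -av /sapmnt/.snapshots/<snapshotname>/data/<SID> /mnt/backupnfs/data/
   # Copy the shared directory, excluding the backup subfolder per Table 55
   rsync -av --exclude '/<SID>/HDB<INST_NR>/backup' /sapmnt/.snapshots/<snapshotname>/shared/ /mnt/backupnfs/shared/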
17.3 Restore of SAP HANA

To prepare for a restore:

• The SAP HANA instance must be stopped.
• Ensure correct file permissions on the snapshot data. The file owner must be the database administrator.
• In case you want to restore the SAP HANA data on a new instance, you need to restore the profiles.
• Copy the backup data data/<SID> to /sapmnt/data/<SID> (see the screenshot of the terminal window below). Simple tools like cp, scp or rsync can be used to copy back the data. This data can then be used for restoration.

The terminal screenshot below visualizes a snapshot (plus subfolders and files) that has been copied back to the correct location.

There are two ways to restore the SAP HANA snapshot: either with SAP HANA Studio or with a command line statement.

Restore with SAP HANA Studio

In SAP HANA Studio right-click on the SAP HANA instance you want to recover to and select "Recover". The recovery wizard appears. Specify "Snapshot" as the type of backup to recover from; this disables the location box.

If you restore on the same system from which the snapshot was taken, you can skip the license key question. If you are restoring to a different system you need to provide a license key. If you do not specify a valid key the restore still completes successfully, but the database instance will be locked afterwards. It is possible to specify a valid license key later on.

The final screen summarizes the restore parameters. In the next step, the restore takes place. Restore time depends on the amount of data being recovered and the number of servers involved. The file snapshot_databackup_0_1 in /sapmnt/data/<SID>/mnt00001/hdb00001 is automatically removed upon a successful restore.

Restore via command line

In order to restore the SAP HANA snapshot, execute the following commands as <sid>adm (here shown for the example SID NKT):

   su - nktadm
   ./HDBSettings.sh recoverSys.py --command "RECOVER DATA USING SNAPSHOT CLEAR LOG"

After the restore completes successfully, the procedure automatically starts the SAP HANA instance.
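Regarding the file-permission prerequisite above: if the restore aborts because the copied snapshot data is not owned by the database administrator, the ownership can be corrected before retrying. A minimal sketch, assuming the example SID NKT and the usual SAP administrator group sapsys (adjust both to your installation):

   # The database administrator <sid>adm must own the restored data files
   chown -R nktadm:sapsys /sapmnt/data/NKT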
18 Troubleshooting

For the Lenovo Systems Solution for SAP HANA Platform Edition, the installation of SLES for SAP as well as the installation and configuration of IBM GPFS and SAP HANA has been greatly simplified by an installation process with an accompanying guided installation. This process automatically installs and configures the base OS components necessary for the SAP HANA appliance software. It is no longer supported to install the OS manually for the Lenovo Solution.

18.1 Adding SAP HANA Worker/Standby Nodes in a Cluster

When configuring a clustered configuration by hand, install SAP HANA worker and standby nodes as described in the Lenovo SAP HANA Appliance Operations Guide 34 (Section 4.3 Cluster Operations → Adding a cluster node).

34 SAP Note 1650046 (SAP Service Marketplace ID required)

18.2 GPFS mount points missing after Kernel Update

If you updated the Linux kernel, you will have to update the portability layers for GPFS before starting SAP HANA; after a kernel reboot, you will not see the GPFS mount points available otherwise. Follow the directions above in the section regarding updating both portability layers.

18.3 Degrading disk I/O throughput

One possible reason for degrading disk I/O on the HDDs or SSDs could be a discharged or disconnected battery on the RAID controller. In that case the cache policy is changed from "WriteBack" (the default) to "WriteThrough", meaning that the data is written to disk instead of to the cache. This will have a significant I/O performance impact.

The StorCLI tool (see section 15.2: ServeRAID StorCLI Utility for Storage Management on page 187) is installed during HANA setup. The path is /opt/MegaRAID/storcli/. If you have been using the MegaCli64 client before, you don't have to learn new commands: the commands are the same. To verify, please proceed as follows:

1. Determine the current cache policy:

   # /opt/MegaRAID/storcli/storcli64 -LdPdInfo -aAll | grep "Cache Policy:"

Sample output (depending on the model there is a varying number of output lines):

   Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
   Current Cache Policy: WriteThrough, ReadAhead, Direct, No Write Cache if Bad BBU
   Default Cache Policy: WriteBack, ReadAhead, Cached, No Write Cache if Bad BBU
   Current Cache Policy: WriteThrough, ReadAhead, Cached, No Write Cache if Bad BBU
   Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
   Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU

2. If the output contains "WriteThrough" for the "Current Cache Policy" while the corresponding "Default Cache Policy" defines "WriteBack", the cache policy has been switched from the "WriteBack" default due to some issue.

3. You can then check each battery's status. For example, with the sample output above you would check the status of the first two adapters' batteries (the third one is OK):

   # /opt/MegaRAID/MegaCli/storcli64 /c0/bbu show all
   # /opt/MegaRAID/MegaCli/storcli64 /c1/bbu show all

If the output contains "Get BBU Capacity Info Failed", then the battery is most likely bad and should be replaced. If the output indicates a state of charge that is significantly smaller than 100%, the battery is most likely bad or disconnected and needs to be replaced or reconnected to the adapter. If any of the above issues occurs, a hardware support call with IBM should be opened.
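Once the battery has been replaced or reconnected, the controller normally returns to its write-back default on its own. Should you need to set it explicitly, recent StorCLI versions accept a command along the following lines; this is a sketch only, and the exact syntax should be verified against the StorCLI reference for your version:

   # Set the write cache of all virtual drives on controller 0 back to WriteBack
   /opt/MegaRAID/storcli/storcli64 /c0/vall set wrcache=wb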
18.4 SAP HANA will not install after a system board exchange

When an IBM Certified Engineer exchanges a system board, he is required only to reset the Manufacturer Type and Model (MTM) and the serial number of the machine inside the EEPROM settings. The SAP HANA hardware checker (before revision 27), however, looks at the description string instead of the MTM.

To work around this issue, a Lenovo services person can use the Lenovo Advanced Settings Utility (ASU) tool (see section 15.1: Lenovo Advanced Settings Utility on page 187) to reset the system product data to the correct data for the SAP installer to work. ASU is installed under /opt/lenovo/toolscenter/asu. The tool can then be used to view or set the firmware settings of the IMM from the command line. For example, to show and subsequently reset the System Product Identifier required by SAP HANA, you can use the following commands:

   # asu64 show SYSTEM_PROD_DATA.SysInfoProdIdentifier --host <IMM Hostname>

(--host can be omitted if the command is run on the actual system)

   # asu64 set SYSTEM_PROD_DATA.SysInfoProdIdentifier "System x3850 X6"

Then dmidecode should return the correct system name after a system reboot.

18.5 Known Kernel Updates

18.6 Important SAP Notes (SAP Service Marketplace ID required)

You can find a list of SAP Notes in Appendix G.4: SAP Notes (SAP Service Marketplace ID required) on page 227. This chapter describes some of these SAP Notes in more detail.

18.6.1 SAP Note 1641148 HANA server hang caused by GPFS issue

https://service.sap.com/sap/support/notes/1641148

18.6.1.1 Symptom

You are running a SAP HANA scale-out landscape and see different time zone settings for the sidadm user.

18.6.1.2 Reason and Prerequisites

Your SAP HANA scale-out landscape shows different time zone settings for at least one server, i.e. the master node shows time zone UTC and all other nodes show time zone CET. This may be caused by an inconsistency in the installation process and should be corrected.

18.6.1.3 Solution

To change the time zone settings of the sidadm user, go to the home directory under /usr/sap/ and edit:

   .sapenv.csh: setenv TZ <time zone>
   .sapenv.sh:  export TZ=<time zone>

Make sure this is done for all HANA nodes. Once the time setting is done, login as the sidadm user again and restart the database.

Additionally, for a scale-out installation an NTP server should be configured, i.e. on the management node of the appliance. You may either use your corporate NTP or ask your hardware partner to set up an NTP server for you. If you see different time settings for the sidadm and the root user, check /etc/adjtime. If you see quite big values, check your NTP and do a re-sync.

Appendices

A GPFS Disk Descriptor Files

GPFS 3.5 introduced a new disk descriptor format called stanzas; the old disk descriptor format is deprecated since GPFS 3.5. This stanza format is also valid for GPFS 4.1.

Create the file /var/mmfs/config/disk.list.data.gpfsnode01 by concatenating the following parts:

1. Always add

   %nsd:
     device=/dev/sdb
     nsd=data01node01
     servers=gpfsnode01
     usage=dataAndMetadata
     failureGroup=1001
     pool=system

2. When having one RAID array in the SAS expansion unit, add

   %nsd:
     device=/dev/sdc
     nsd=data02node01
     servers=gpfsnode01
     usage=dataAndMetadata
     failureGroup=1001
     pool=system

3. When having two RAID arrays in the SAS expansion unit, also add

   %nsd:
     device=/dev/sdd
     nsd=data03node01
     servers=gpfsnode01
     usage=dataAndMetadata
     failureGroup=1001
     pool=system

4. Always add these lines at the end

   %pool:
     pool=system
     blockSize=1M
     usage=dataAndMetadata
     layoutMap=cluster
     allowWriteAffinity=yes
     writeAffinityDepth=1
     blockGroupFactor=1
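The resulting stanza file is then handed to GPFS when the NSDs are (re)created. A minimal sketch, assuming the file name from above; mmcrnsd is the standard GPFS command for this step, but verify the call against the GPFS documentation for your release before running it:

   # Create the NSDs described by the stanza file
   mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnode01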
e. A dedicated standby node is a node which has no HANA instance running with a configured role of master/slaves.Technical Documentation C Quotas C. In general the quota calculations follows SAP recommendations for HANA 1. The standard installation uses this script to calculate the quotas during installation and the administrator can also call this script to recalculate the quotas after a topology change happened. For a DR solution no reliable guess on the nodes can be made and manual override must be used. In DR-enabled cluster a quota should be set only for SAP HANA’s log files. the last two are installed as standbys.(quota for logs) .g.sh. installation of more HANA instances. The formula for the quota calculation is 1 2 3 quota for logs = (# active Nodes) x 1024 GB quota for data = (# active Nodes) x (RAM per node in GB) x 3 x (Replication factor) quota for shared = (available space) . The first six nodes are installed as worker nodes.96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. 2 for clusters and 3 for DR enabled clusters. in this case as only one HANA cluster is allowed on the DR file system. this number is of course 1. For single nodes. HANA data volumes and for the shared HANA data. A second HANA system QA1 is installed with a worker node on the last (eight) node and a standby node on node seven.(quota for data) The number of active nodes needs explanation. Most values are read from the system or guessed. changing node role.sh X6 Implementation Guide 1. shrinking or growing the cluster. • Another eight node cluster has a HANA system ABC installed with the first seven nodes as workers and the last nodes as a standby node. it’s actually active for the QA1 cluster. Please use the quota calculator in the next section C. The replication factor should be 1 for single nodes.2 Quota Calculation Script A script is available to ease the quota calculation. So this cluster has clearly two dedicated standby nodes. solely on the count of the worker nodes. The basic call is 1 # saphana-quota-calculator. For a cluster the standard assumption is to have one dedicated standby node.1 Quota Calculation Note This section is only for information purposes.9.2: Quota Calculation Script on page 209. An utility script is provided to make the calculation easier. 2015 210 . In the case of a DR enabled cluster. The number of workers and standbys should equal the number of nodes on a site. Additional parameters are -r to get a more detailed report on the quota calculation and -c to verify the currently set quotas (allows a deviation of 10%. After reviewing these you can add the -a parameter to the call which will automatically set the quotas as calculated. Please use also the parameter -w <# workers> to set the number of nodes running HANA as active worker.96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo.9. 0 is also a valid value.Technical Documentation As a result it will give the calculated quotas and the commands to set them to the calculated result. the guess for the active worker nodes will be always wrong. use the parameter -s <# standby> to set a specific number of standby hosts. In the case you are running a cluster and the number of dedicated standbys is not one. too inaccurate for larger clusters with more than 8 nodes). X6 Implementation Guide 1. 9.[sid]adm hdbparam --paramset fileio.el6. 
D Performance Settings

Please review the following configuration settings if the support script indicates it:

1. Change Processor C-State Boot parameter

This will disable the use of some processor C-States, which can reduce power consumption but lower performance. SAP requires this parameter to be set at boot. This boot parameter should not have any effect on Lenovo solutions, as restricting the processor C-states is done in other settings.

(a) ELILO installed systems (SLES based systems): Change line 12 in /etc/elilo.conf from

   append = "resume=/dev/sda3 splash=silent transparent_hugepage=never intel_idle.max_cstate=0 showopts"

to

   append = "resume=/dev/sda3 splash=silent transparent_hugepage=never intel_idle.max_cstate=0 processor.max_cstate=0 showopts"

(b) GRUB installed systems (RHEL based systems): Change line 17 in /boot/efi/efi/redhat/grub.conf from e.g.

   kernel /boot/vmlinuz-2.6.32-504.el6.x86_64 ro root=UUID=3d420911-eef8-46de-b019-aff9d6e7d36a rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM intel_idle.max_cstate=0 transparent_hugepage=never crashkernel=auto rhgb quiet

to

   kernel /boot/vmlinuz-2.6.32-504.el6.x86_64 ro root=UUID=3d420911-eef8-46de-b019-aff9d6e7d36a rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM intel_idle.max_cstate=0 processor.max_cstate=0 transparent_hugepage=never crashkernel=auto rhgb quiet

2. SAP HANA HDB Parameters

There is a set of parameters for HANA that will adjust the asynchronous I/O to increase performance. These parameters can be applied by the [sid]adm user, only after installing HANA:

   su - [sid]adm
   hdbparam --paramset fileio.max_parallel_io_requests=256
   hdbparam --paramset fileio.async_write_submit_active=on
   hdbparam --paramset fileio.async_write_submit_blocks=all
   hdbparam --paramset fileio.async_read_submit=on

There are two additional parameters that are not available in HANA revision 80, but are available in revisions 90 and above:

   hdbparam --paramset fileio.size_kernel_io_queue=1024

More information is available in SAP Note 1930979 – Alert: Sync/Async read ratio.
3. TCP Window Adjustment

These settings adjust the network receive and transmit buffers for all connections in the OS. They are raised from their defaults in order to increase performance on scale-out systems. Lines 17-18 in /etc/sysctl.conf should be changed from

   net.ipv4.tcp_rmem="4096 262144 8388608"
   net.ipv4.tcp_wmem="4096 262144 8388608"

to

   net.ipv4.tcp_rmem="8388608 8388608 8388608"
   net.ipv4.tcp_wmem="8388608 8388608 8388608"

To temporarily apply the changes immediately without a reboot, run the following commands:

   sysctl -w net.ipv4.tcp_rmem="8388608 8388608 8388608"
   sysctl -w net.ipv4.tcp_wmem="8388608 8388608 8388608"

To apply this configuration on boot, create the file /etc/init.d/after.local with the following lines:

   #!/bin/bash
   sysctl -w net.ipv4.tcp_rmem="8388608 8388608 8388608"
   sysctl -w net.ipv4.tcp_wmem="8388608 8388608 8388608"

Make the file executable:

   chmod 755 /etc/init.d/after.local

4. Linux I/O Scheduler Adjustment

The Linux I/O scheduler should be changed from the default mode (CFQ, or Completely Fair Queuing) to the noop mode. This algorithm change increases I/O performance on SAP HANA. To apply this configuration on boot, edit the /etc/init.d/ibm-saphana / /etc/init.d/lenovo-saphana file installed with the machine and insert the scheduler change into this script at line 30:

   echo noop > ${i}/queue/scheduler

Before the change, lines 26-31 look like:

   for i in /sys/block/sd* ; do
     if [ -d $i ]; then
       echo $QUEUESIZE > $i/queue/nr_requests
       echo $QUEUEDEPTH > $i/device/queue_depth
     fi
   done

Afterwards, lines 26-32 look like:

   for i in /sys/block/sd* ; do
     if [ -d $i ]; then
       echo $QUEUESIZE > $i/queue/nr_requests
       echo $QUEUEDEPTH > $i/device/queue_depth
       echo noop > ${i}/queue/scheduler
     fi
   done

To temporarily apply the settings immediately without a reboot, perform the following command for each disk entry (sda, sdb, etc.) in /sys/block/:

   echo noop > /sys/block/sda/queue/scheduler
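You can verify the active scheduler per device at any time; the scheduler currently in use is shown in square brackets:

   # The active scheduler appears in brackets, e.g. "[noop] deadline cfq"
   cat /sys/block/sda/queue/scheduler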
E Lenovo X6 Server MTM List & Model Overview

Starting with the support of the Intel Xeon IvyBridge EX family of processors, SAP has changed their naming of the models. Previously, SAP had named these "T-Shirt" sizes of S, M, L, XL, etc. The new naming convention is purely based on the amount of memory each predefined configuration contains, for example 128, 256, 512, etc.

The following tables show the SAP HANA T-Shirt size to Machine Type Model (MTM) code mapping. While the Machine Type is 6241, the different models are shown below. Each of these servers is orderable with the proper components to fulfill the SAP pre-configured system sizes. The last x in the MTM is a placeholder for the region code the server was sold in, for example a U for the USA.

   Chassis  CPUs  Memory  Usage       Model
   4U       2     128GB   Standalone  AC32S128S
   4U       2     256GB   Standalone  AC32S256S
   4U       2     256GB   Scale-out   AC32S256C
   4U       2     384GB   Standalone  AC32S384S
   4U       2     512GB   Standalone  AC32S512S
   4U       2     512GB   Scale-out   AC32S512C
   4U       4     256GB   Standalone  AC34S256S
   4U       4     512GB   Standalone  AC34S512S
   4U       4     512GB   Scale-out   AC34S512C
   4U       4     768GB   Standalone  AC34S768S
   4U       4     1TB     Standalone  AC34S1024S
   4U       4     1TB     Scale-out   AC34S1024C
   4U       4     1.5TB   Standalone  AC34S1536S
   4U       4     2TB     Standalone  AC34S2048S
   4U       4     3TB     Standalone  AC34S3072S
   4U       4     4TB     Standalone  AC34S4096S
   4U       4     6TB     Standalone  AC34S6144S

Table 57: Lenovo MTM Mapping & Model Overview

All of these configurations are orderable as 6241-AC3. In addition, pre-configured region-coded MTMs exist for them: 6241-H2x (1) to 6241-H6x (1) for IvyBridge based models, 6241-HVx (2) to 6241-HZx (2) for Haswell based models with DDR3 DIMMs, and 6241-HQx (3) to 6241-HUx (3) for Haswell based models with DDR4 DIMMs.

(1) Only IvyBridge processors with DDR3 DIMMs
(2) Only Haswell processors with DDR3 DIMMs
(3) Only Haswell processors with DDR4 DIMMs

   Chassis  CPUs  Memory  Usage       Model
   8U       4     256GB   Standalone  AC44S256S
   8U       4     512GB   Standalone  AC44S512S
   8U       4     512GB   Scale-out   AC44S512C
   8U       4     768GB   Standalone  AC44S768S
   8U       4     1TB     Standalone  AC44S1024S
   8U       4     1TB     Scale-out   AC44S1024C
   8U       4     1.5TB   Standalone  AC44S1536S
   8U       4     2TB     Standalone  AC44S2048S
   8U       8     512GB   Standalone  AC48S512S
   8U       8     1TB     Scale-out   AC48S1024C
   8U       8     1.5TB   Standalone  AC48S1536S
   8U       8     2TB     Standalone  AC48S2048S
   8U       8     2TB     Scale-out   AC48S2048C
   8U       8     3TB     Standalone  AC48S3072S
   8U       8     3TB     Scale-out   AC48S3072C
   8U       8     4TB     Standalone  AC48S4096S
   8U       8     4TB     Scale-out   AC48S4096C
   8U       8     6TB     Standalone  AC48S6144S
   8U       8     6TB     Scale-out   AC48S6144C
   8U       8     8TB     Standalone  AC48S8192S
   8U       8     8TB     Scale-out   AC48S8192C
   8U       8     12TB    Standalone  AC48S12288S
   8U       8     12TB    Scale-out   AC48S12288C

Table 58: Lenovo MTM Mapping & Model Overview

All of these configurations are orderable as 6241-AC4. In addition, pre-configured region-coded MTMs exist for them: 6241-HBx (1) to 6241-HDx (1) for IvyBridge based models, 6241-HEx (2) to 6241-HGx (2) for Haswell based models with DDR3 DIMMs, and 6241-HHx (3) to 6241-HJx (3) for Haswell based models with DDR4 DIMMs.

(1) Only IvyBridge processors with DDR3 DIMMs
(2) Only Haswell processors with DDR3 DIMMs
(3) Only Haswell processors with DDR4 DIMMs

The model numbers follow this schema:

1. AC3/AC4 describes the server chassis: AC3 are 4 rack unit sized servers for up to 4 CPU books, AC4 servers are 8 rack unit sized servers for up to 8 CPU books.
2. 2S/4S/8S gives the number of installed CPU books and by this the number of populated CPU sockets.
3. 128/256/... is the size of the installed RAM in GB.
4. S/C designates the intended usage: S for Standalone/Single Node, C for Cluster/Scale-out nodes.

These model numbers describe the current configuration of the server. A 6241-H2* is configured with 2 CPUs in a 4 socket chassis with 128GB RAM and will be recognized as an AC32S128S by the installation and any installed scripts. When upgrading this machine with an additional 128GB of RAM, the installation and the already installed scripts will show the model as AC32S256S, while the burned-in MTM will still show 6241-H2* or 6241-AC3.
F Frequently Asked Questions

Warning
These FAQ entries are only valid for certain appliance models and versions. Do not apply the changes in this list until advised by either the support script or Lenovo support. Please follow only the instructions given in the particular entry.

The support script saphana-support-ibm.sh can detect various known problems in your appliance. You can find the latest version attached to SAP Note 1661146 – Lenovo Check Tool for SAP HANA appliances. Please always use the latest support script, which may detect new issues found after installing your appliance. In case such a problem is found, the support script will give an FAQ entry number. Information on how to run the support script can be found in the Operations Guide, section 2.3 Basic System Check. When in doubt please contact Lenovo support via SAP's OSS ticket system.

F.1 FAQ #1: SAP HANA Memory Limits

Problem: If left unconfigured, each installed and running HANA instance may use up to 97% (90% in older HANA revisions) of the system's memory. An unconfigured HANA system is a system lacking a global_allocation_limit setting in the HANA system's global.ini file. Misconfigured SAP HANA systems are multiple systems running at the same time with a combined memory limit over 90% of the physically installed memory. If multiple unconfigured or misconfigured HANA systems are running on the same machine(s), "Out of Memory" situations may occur. In this case the so-called "OOM Killer" of Linux gets triggered, which will terminate running processes at random and in most cases will kill SAP HANA or GPFS first, leading to service interruption.

Solution: Please configure the global allocation limit for all systems running at the same time. This can be done by setting the global_allocation_limit parameter in the systems' global.ini configuration files. Please calculate the combined memory allocation for HANA so that at least 25GB are free for other programs, and use only the physically installed memory for your calculation. More information on the parameter global_allocation_limit can be found in the "HANA Administration Guide" at http://help.sap.com/hana_appliance/.
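As an illustration only (the limit must be calculated for your machine as described above): the parameter is typically set in the [memorymanager] section of global.ini and is given in MB. For a single instance on a machine with 256GB of physically installed memory, keeping 25GB free could look like this:

   # global.ini of the HANA system (illustrative values)
   [memorymanager]
   global_allocation_limit = 236544    # 231 GB expressed in MB, leaving 25 GB free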
In the end at some point the whole device NSD will overwrite the partition table and the partition NSD is lost and GPFS will fail. /dev/sdb) as well as a partition on the same device (e. F. More information on the parameter global_allocation_limit can be found in the "HANA Administration Guide" at http://help.Technical Documentation single SAP HANA system will use up to 93.9.96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo. Please use only the physically installed memory for your calculation. the single node installation must be completed in phase 2 of the guided installation. each NSD will overwrite and corrupt data on the other NSD.g.4 FAQ #4: Overlapping NSDs Problem: Under some rare conditions single node SSD or XS/S gen 2 models may be installed with overlapping NSDs.sap. /dev/sdb2) may be configured as NSDs in GPFS. you may install those missing packages and still receive full support of the Lenovo Systems solution. Please configure the memory limits as described there.g.com/hana_appliance/.Added for HANA Developer Studio – java-1_6_0-ibm . This is the most common situation where the problem will be noticed. As GPFS is writing data on both NSDs. First Check to see if the SUSE Linux Enterprise Server is already added as an repository: 1 # zypper repos 2 3 4 5 # | Alias | Name | Enabled | Refresh --+----------------+----------------+---------+-------1 | SUSE-Linux-.Technical Documentation – tcsh – libssh2-1 .Added since revision 53 (SPS05) – expect . 1 2 3 # zypper addrepo --type yast2 --gpgcheck --no-keep-packages\ --refresh --check dvd:///?devices=/dev/sr1 \ "SUSE-Linux-Enterprise-Server-11-SP1_11.1" 4 5 6 7 8 9 10 11 12 This is a changeable read-only media (CD/DVD).. It is possible to add the DVD that was included in your appliance install as a repository and from there install the necessary RPM package..138 | Yes | No | cd:///?devices=/dev/sr0 X6 Implementation Guide 1.. disabling autorefresh.1. Missing packages can be installed from the SLES for SAP DVD shipped with your appliance using the following instructions.1.1. Another possibility is to copy the DVD to a local repository and add this repository to zypper.3.3-1.1' media Retrieving repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.→ 1 | SUSE-Linux-Enterprise-Server-11-SP3 11. First find out if the local repository is a DVD repository 1 2 3 4 # zypper lr -u # | Alias | Name ←. 2015 218 .3-1.1' metadata [done] Building repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.3.Added since revision 53 (SPS05) – yast2-ncurses .96-13 Lenovo® Systems Solution™ for SAP HANA® © Copyright Lenovo.→-11-SP3 11.1' [done] Repository 'SUSE-Linux-Enterprise-Server-11-SP1_11. | Yes | No If it doesn’t exist.1. please place the DVD in the drive (or add it via the Virtual Media Manager) and add it as a repository.Added since revision 53 (SPS05) • Red Hat Enterprise Linux: At the moment there are no known packages that have to be installed additionally.9..1.Added since revision 53 (SPS05) – autoyast2-installation . | SUSE-Linux-.→ | Enabled | Refresh | URI --+--------------------------------------------------+----------------------------------------------. This example uses the SLES for SAP 11 SP1 media.1' successfully added Enabled: Yes Autorefresh: No GPG check: Yes URI: dvd:///?devices=/dev/sr1 13 14 15 16 17 18 19 Reading data from 'SUSE-Linux-Enterprise-Server-11-SP1_11. that you always have to insert the DVD into the DVD-Drive or mounted via VMM or KVM. 
It is possible to add the DVD that was included in your appliance install as a repository and from there install the necessary RPM packages. This example uses the SLES for SAP 11 SP1 media. First check whether the SUSE Linux Enterprise Server media is already added as a repository:

   # zypper repos

   # | Alias          | Name           | Enabled | Refresh
   --+----------------+----------------+---------+--------
   1 | SUSE-Linux-... | SUSE-Linux-... | Yes     | No

If it doesn't exist, place the DVD in the drive (or add it via the Virtual Media Manager) and add it as a repository:

   # zypper addrepo --type yast2 --gpgcheck --no-keep-packages \
     --refresh --check dvd:///?devices=/dev/sr1 \
     "SUSE-Linux-Enterprise-Server-11-SP1_11.1.1"

   This is a changeable read-only media (CD/DVD), disabling autorefresh.
   Adding repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1' [done]
   Repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1' successfully added
   Enabled: Yes
   Autorefresh: No
   GPG check: Yes
   URI: dvd:///?devices=/dev/sr1

   Retrieving repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1' metadata [done]
   Building repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1' cache [done]

The drawback of this solution is that you always have to insert the DVD into the DVD drive or mount it via VMM or KVM. Another possibility is to copy the DVD to a local repository and add this repository to zypper.

Copy the DVD to a local directory:

   # cp -r /media/SLES-11-SP3-DVD*/* /var/tmp/install/sles11/ISO/

Register the directory as a repository to zypper:

   # zypper addrepo --type yast2 --gpgcheck --no-keep-packages -f \
     file:///var/tmp/install/sles11/ISO/ "SUSE-Linux-Enterprise-Server-11-SP3"

   Adding repository 'SUSE-Linux-Enterprise-Server-11-SP3' [done]
   Repository 'SUSE-Linux-Enterprise-Server-11-SP3' successfully added
   Enabled: Yes
   Autorefresh: Yes
   GPG check: Yes
   URI: file:/var/tmp/install/sles11/ISO/

For verification you can list the repositories again; you should see an output similar to this:

   # zypper lr -u

   # | Alias                                      | Name                                | Enabled | Refresh | URI
   --+--------------------------------------------+-------------------------------------+---------+---------+----------------------------------
   1 | SUSE-Linux-Enterprise-Server-11-SP3        | SUSE-Linux-Enterprise-Server-11-SP3 | Yes     | Yes     | file:/var/tmp/install/sles11/ISO/
   2 | SUSE-Linux-Enterprise-Server-11-SP3 11.138 | SUSE-Linux-Enterprise-Server-...    | Yes     | No      | cd:///?devices=/dev/sr0

Then search to ensure that the package can be found. This example searches for libssh:

   # zypper search libssh

   Loading repository data...
   Reading installed packages...

   S | Name      | Summary                             | Type
   --+-----------+-------------------------------------+--------
     | libssh2-1 | A library implementing the SSH2 ... | package

Then install the package:

   # zypper install libssh2-1

   Loading repository data...
   Reading installed packages...
   Resolving package dependencies...

   1 new package to install.
   Overall download size: 55.0 KiB. After the operation, additional 144.0 KiB will be used.
   Continue? [y/n/?] (y):
   Retrieving package libssh2-1-0.19.0+20080814-2.16.1.x86_64 (1/1), 55.0 KiB (144.0 KiB unpacked)
   Retrieving: libssh2-1-0.19.0+20080814-2.16.1.x86_64.rpm [done]
   Installing: libssh2-1-0.19.0+20080814-2.16.1 [done]
F.6 FAQ #6: CPU Governor set to ondemand

Problem: Linux uses a technology for power saving called "CPU governors" to control CPU throttling and power consumption. By default Linux uses the governor "ondemand", which will dynamically throttle CPUs up and down depending on CPU load. SAP advises using the governor "performance", as the ondemand governor will impact HANA performance due to too slow CPU upscaling by this governor. This performance boost was not quantified by the development team. Since appliance version 1.5.53-5 (or simply SLES for SAP 11 SP2 based appliances) we changed the CPU governor to performance. If you are still running SLES for SAP 11 SP1 based appliances, you may also change this setting to trade power saving for performance. In case of an upgrade you also need to change the governor setting.

Solution: On all nodes append the following lines to the file /etc/rc.d/boot.local:

   bios_vendor=$(/usr/sbin/dmidecode -s bios-vendor)
   # Phoenix Technologies LTD means we are running in a VM and governors are not available
   if [ $? -eq 0 -a ! -z "${bios_vendor}" -a "${bios_vendor}" != "Phoenix Technologies LTD" ]; then
     /sbin/modprobe acpi_cpufreq
     for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
     do
       echo performance > $i
     done
   fi

The setting will change on the next reboot. You can also safely change the governor settings immediately by executing the same lines at the shell. Copy & paste all the lines at once, or type them one by one.
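To check which governor is currently active, read the corresponding sysfs entries; after the change every CPU should report "performance":

   # Show the scaling governor of all CPUs
   cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor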
F.7 FAQ #7: No disk space left bug (Bug IV33610)

Problem: Starting HANA fails due to insufficient disk space. The following error message will be found in the indexserver or nameserver trace:

   Error during asynchronous file transfer, rc=28: No space left on device

Using the command 'df' will show that there is still disk space left. This problem is due to a bug in GPFS versions between 3.4.0-12 and 3.5.0-20 which causes GPFS to step into a read-only mode. See SAP Note 1846872 – "No space left on device" error reported from HANA.

Solution: Make sure to shut down all HANA nodes by issuing the shutdown command from the Studio, or login with ssh using the sidadm user. Then run:

   HDB info

to see if any HANA processes are still running. If there are, run

   kill -9 <proc_pid>

for each of them to shut them down. Then download and apply a GPFS version newer than the affected range (3.4.0-12 to 3.5.0-20) which contains the fix. SAP highly recommends that you run the uniqueChecker.py script after patching GPFS to make sure that your database is consistent.

Note
It is recommended that you consider upgrading your GPFS version from 3.4 to 3.5, as support for GPFS 3.4 has been discontinued by IBM. Refer to section 13.5: Updating GPFS on page 167 for information about how to upgrade GPFS.

F.8 FAQ #8: Setting C-States

Problem: Poor performance of SAP HANA due to Intel processor settings. By default C-States are enabled in the UEFI due to the fact that we set the processor to Customer Mode. This is not the preferred state for the SAP HANA appliance and must be changed. With C-States being turned on you might see performance degradations with SAP HANA: C-States can cause minor latency as the CPUs transition out of a C-State and into a running state.

Solution: As recommended in the SAP Notes 1824819 – SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP2 and 1954788 – SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP3, and additionally described in the IBM RETAIN Tip H207000 35 – Linux Ignores C-State Settings in Unified Extensible Firmware Interface (UEFI), the control ('C') states of the Intel processor should be turned off for the most reliable performance of SAP HANA. We recommend turning off the processor C-States using the Linux kernel boot parameter:

   processor.max_cstate=0

The Linux kernel used by SAP HANA includes a built-in driver ('intel_idle') which will ignore any C-State limits imposed by the Basic Input/Output System (BIOS)/Unified Extensible Firmware Interface (UEFI) when it is active. This driver may cause issues by enabling C-States even though they are disabled in the BIOS or UEFI. To prevent the 'intel_idle' driver from ignoring BIOS or UEFI settings for C-States, add the following start parameter to the kernel's boot loader configuration file:

   intel_idle.max_cstate=0

Append both parameters to the end of the kernel command line of your boot loader (/boot/grub/menu.lst) and reboot the server.

Warning
For clustered configurations, this change needs to be done on each server of the cluster. Only make this change when all servers can be rebooted at once, or when you have an active standby node that can take over the HANA services of the rebooting system. Do not try to reboot more servers than standby nodes are active. For further information please refer to the SUSE knowledgebase article.

35 http://www.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5091901
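After the reboot you can verify that both limits are active; the kernel exports the intel_idle limit in sysfs and the full boot command line in /proc:

   # Should print 0 when intel_idle.max_cstate=0 is in effect
   cat /sys/module/intel_idle/parameters/max_cstate
   # Confirm both parameters were passed at boot
   cat /proc/cmdline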
F.10 FAQ #10: GPFS Parameter enableLinuxReplicatedAIO

With GPFS version 3.5.0-13 the new GPFS parameter enableLinuxReplicatedAIO was introduced. It can be set to "yes" or "no". The support script (saphana-support-ibm.sh) checks if the parameter is set correctly. Please note the following:

• Single node installations: Single node installations are not affected by this parameter.
• Cluster installations:
    – GPFS 3.5.0-13 to 3.5.0-15: The parameter must be set to "no".
    – GPFS 3.5.0-16 or higher: The parameter must be set to "yes".
• DR cluster installations: The parameter must be set to "yes".

When upgrading to GPFS 3.5.0-16 or higher you have to set the value to "yes" manually. If it is not set correctly, adjust the setting:

    # mmchconfig enableLinuxReplicatedAIO=no
    # mmchconfig enableLinuxReplicatedAIO=yes

Warning
Instead of setting the parameter to "no" we highly recommend to upgrade GPFS to 3.5.0-16 or higher.

F.11 FAQ #11: GPFS NSD on Devices with GPT Labels

Problem: On some very rare occasions GPFS NSDs may be created on devices with a GUID Partition Table (GPT). When the NSD is created, parts of the primary GPT header are overwritten. Newer UEFI firmware releases offer an option to repair damaged GPTs, and if this option is activated the UEFI may try to recover the primary GPT from the backup copy during boot-up. This will destroy the NSD header, and in the case of single nodes this leads to the loss of all data in the GPFS filesystem. For this issue to occur, the following prerequisites must all apply:

• A storage device used as an NSD in a GPFS filesystem must have had a GPT before the NSD was created. This can only happen if the drive or RAID array was used before and has not been wiped or reassembled. As part of the HANA appliance, GPT labels on non-OS disks are only created as part of the mixed eX5/X6 clusters. If a system was only used for the HANA appliance, this cannot occur unless there was a misconfiguration.
• GPFS 3.4 or GPFS 3.5 was used when the NSD and the filesystem were created, either during installation or manually after installation, regardless of the currently running GPFS version. GPFS 4.1 uses protective partition tables to prevent this issue when creating new NSDs.
• A UEFI version with GPT recovery functionality is either installed or an upgrade to such a version is planned. Further risk comes from the UEFI upgrade, as these new UEFI versions will enable the GPT recovery by default.

The probability for this combination is very low.

Solution: If the support script pointed you to this FAQ entry, please contact Lenovo Support via SAP's OSS Ticket System and put the message on the queue BC-OP-LNX-IBM. Please prepare a support script dump as described in SAP Note 1661146 – Lenovo Check Tool for SAP HANA appliances. Lenovo support will then devise a solution for your installation.

When the ASU tool is installed, run the command

    # /opt/lenovo/toolscenter/asu/asu64 show | grep -i gpt

If the Lenovo Systems Solution for SAP HANA Platform Edition was installed with an ISO image below version 1.9.96-13, the ASU tool will reside here:

    # /opt/ibm/toolscenter/asu/asu64 show | grep -i gpt

The setting has various names, but any variable named GPT and Recovery should be set to "None". If it is set to "Automatic" do not reboot the system. If there is no such setting, do not upgrade the UEFI firmware until the GPTs have been cleared.
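To check at a glance whether the enableLinuxReplicatedAIO parameter from FAQ #10 matches the rules above, the configured value and the installed GPFS level can be queried as follows. This is a sketch only: mmlsconfig is the same command used elsewhere in this guide, while the rpm query pattern is an assumption and may need to be adapted to your package naming:

    # show the configured value; no output means the parameter is not set explicitly
    mmlsconfig | grep -i enableLinuxReplicatedAIO

    # show the installed GPFS packages to determine the level (e.g. 3.5.0-16)
    rpm -qa 'gpfs*'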
F.12 FAQ #12: GPFS pagepool should be set to 4GB

Problem: GPFS is configured to use 16GB of RAM for its so-called pagepool. Recent tests showed that the size of this pagepool can be safely reduced to 4GB, which yields 12GB of memory for other running processes. It is therefore recommended to change this parameter on all appliance installations and versions. Updated versions of the support script will warn if the pagepool size is not 4GB and will refer to this FAQ entry.

Solution: Please change the pagepool size to 4GB. Execute

    # mmchconfig pagepool=4G

to change the setting cluster-wide; the command therefore needs to be run only once, on single-node and on clustered installations alike. The pagepool is allocated during the startup of GPFS, so a GPFS restart is required to activate the new setting. Please stop HANA and any other processes that access GPFS filesystems before restarting GPFS. To restart GPFS execute

    # mmshutdown
    # mmstartup

In clusters all nodes need to be restarted. You can do this one node at a time (see the sketch below), or restart all nodes at once by adding the parameter -a to both commands. In the latter case please make sure no program is accessing GPFS filesystems on any node.

To verify the configured pagepool size run

    # mmlsconfig | grep pagepool

To verify the currently active pagepool size run

    # mmdiag --config

and search for the pagepool line. This value is shown in bytes.
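For the one-node-at-a-time restart described above, a loop along the following lines can serve as a starting point. It is a sketch only: the host names node01 to node04 are placeholders for your cluster nodes, passwordless root SSH between the nodes is assumed, and SAP HANA must already be stopped on a node before its GPFS is restarted:

    for node in node01 node02 node03 node04; do
        ssh ${node} mmshutdown
        ssh ${node} mmstartup
        # proceed to the next node only once mmgetstate reports "active"
        ssh ${node} mmgetstate
    done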
F.13 FAQ #13: Limit Page Cache Pool to 4GB (SAP Note #1557506)

Problem: SLES offers an option to limit the size of the page cache pool. By default the page cache size is unlimited. SAP recommends in SAP Note 1557506 – Linux paging improvements to limit this page cache to 4GB of RAM. This may improve resilience against out-of-memory events. Future appliance software versions will set this value by default. RHEL does currently not offer this option.

Solution: Add the following line to the file /etc/sysctl.conf:

    vm.pagecache_limit_mb = 4096

and run

    # sysctl -e -p

to activate this value without a reboot. This change can be done without downtime.

F.14 FAQ #14: restripeOnDiskFailure and start-disks-on-startup

GPFS 3.5 and higher come with the new parameter restripeOnDiskFailure. The GPFS callback script start-disks-on-startup automatically installed on the Lenovo Solution is superseded by this parameter: GPFS NSDs are automatically started on startup when restripeOnDiskFailure is activated. On DR cluster installations, neither the callback script nor restripeOnDiskFailure should be activated.

Solution: To enable the new parameter on all nodes in the cluster execute:

    # mmchconfig restripeOnDiskFailure=yes -N all

To remove the now unnecessary callback script start-disks-on-startup execute:

    # mmdelcallback start-disks-on-startup
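To confirm the change took effect, both the parameter and the callback list can be queried. This is a short check, assuming the GPFS administration commands are in the PATH as in the other listings of this guide:

    # should print a line containing: restripeOnDiskFailure yes
    mmlsconfig | grep -i restripeOnDiskFailure

    # start-disks-on-startup should no longer appear in the callback list
    mmlscallback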
G References

G.1 Lenovo References

Lenovo Solution Documentation
• Lenovo Systems Solution for SAP HANA Quick Start Guide
• Lenovo Systems X6 Solution for SAP HANA Implementation Guide
• SAP Note 1650046 – Lenovo Systems X6 Solution for SAP HANA Operations Guide

Lenovo System x Documentation
• IBM X6 Portfolio Overview Redbook
• IBM eX5 Portfolio Overview Redbook
• IBM System Storage EXP2500 Express Specifications
• Lenovo RackSwitch G8052 Redbook
• Lenovo RackSwitch G8124E Redbook
• Lenovo RackSwitch G8264 Redbook
• Lenovo RackSwitch G8272 Redbook
• Lenovo RackSwitch G8296 Redbook
• LNVO-ASU – Lenovo Advanced Settings Utility (ASU)
• LNVO-DSA – Lenovo Dynamic System Analysis (DSA)
• MIGR-5090923 – IBM SSD Wear Gauge CLI utility

G.2 IBM References

IBM General Parallel File System Documentation
• IBM General Parallel File System Documentation
• GPFS FAQ (with supported OS levels)
• GPFS Service on IBM Fix Central (IBM ID required) for GPFS 3.5.0
• GPFS Books – IBM developerWorks Article: GPFS Quick Start Guide for Linux
• GPFS Support in IBM Support Portal (IBM ID required)

G.3 SAP General Help (SAP Service Marketplace ID required)
• SAP Service Marketplace
• SAP Help Portal
• SAP HANA Ramp-Up Knowledge Transfer Learning Maps
• SAP HANA Software Download at SAP Software Download Center → Support Packages and Patches / Installations and Upgrades → A–Z Index → H (for SAP HANA)

G.4 SAP Notes (SAP Service Marketplace ID required)

Generic SAP Notes about SAP HANA
• SAP Note 1730996 – Unrecommended external software and software versions
• SAP Note 1730929 – Using external tools in an SAP HANA appliance
• SAP Note 1803039 – Statistics server CHECK_HOSTS_CPU intern. error when restart

SAP Notes about the Lenovo Systems Solution for SAP HANA
• SAP Note 1650046 – Lenovo SAP HANA Appliance Operations Guide
• SAP Note 1661146 – Lenovo Check Tool for SAP HANA appliances
• SAP Note 1880960 – Lenovo Systems Solution for SAP HANA Platform Edition Customer Maintenance

SAP Notes regarding SAP HANA
• SAP Note 1523337 – SAP HANA Database 1.00 - Central Note
• SAP Note 2159166 – SAP HANA SPS 09 Database Revision 96
• SAP Note 1681092 – Multiple SAP HANA databases on one SAP HANA system
• SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery
• SAP Note 1780950 – Connection problems due to host name resolution
• SAP Note 1829651 – Time zone settings in HANA scale out landscapes
• SAP Note 1743225 – Potential failure of connections with scale out nodes
• SAP Note 1888072 – SAP HANA DB: Indexserver crash in __strcmp_sse42
• SAP Note 1890444 – Slow HANA system due to CPU power save mode

SAP Notes regarding SUSE Linux Enterprise Server for SAP Applications
• SAP Note 784391 – SAP support terms and 3rd-party Linux kernel drivers
• SAP Note 1310037 – SUSE LINUX Enterprise Server 11: Installation notes
• SAP Note 1954788 – SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP3
• SAP Note 618104 – Linux SAP System Information Tool
• SAP Note 1056161 – SUSE Priority Support for SAP applications
• SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11

SAP Notes regarding Red Hat Enterprise Linux
• SAP Note 2013638 – SAP HANA DB: Recommended OS settings for RHEL 6.5
• SAP Note 2136965 – SAP HANA DB: Recommended OS settings for RHEL 6.6
• SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11

SAP Notes regarding IBM GPFS
• SAP Note 1084263 – Cluster File System: Use of GPFS on Linux
• SAP Note 1902281 – GPFS 3.5 incompatibility with Linux kernel 3.0.58 and higher
• SAP Note 2051052 – GPFS "No space left on device" when df shows free space

SAP Notes regarding Virtualization
• SAP Note 1122387 – Linux: SAP Support in virtualized environments

G.5 Novell SUSE Linux Enterprise Server References

Currently Supported
• SUSE Linux Enterprise Server 11 SP3 Release Notes
• SUSE Linux Enterprise Server for SAP Applications 11 SP3 Media

G.6 Red Hat Enterprise Linux References (Red Hat account required)
• Red Hat Enterprise Linux 6: Why can I not install or start SAP HANA after a system upgrade?
• Red Hat Enterprise Linux 6: Red Hat Enterprise Linux for SAP HANA: system updates and supportability

H Changelog

This section describes the changes that have been done within a release version since it was published.