FAQ of SAP Note 1872170 "S/4HANA and Suite on HANA memory sizing" (v74)




© 2013 SAP AG or an SAP affiliate company. All rights reserved. No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice. Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors. National product specifications may vary. These materials are provided by SAP AG and its affiliated companies ("SAP Group") for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty. SAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and other countries. Please see http://www.sap.com/corporate-en/legal/copyright/index.epx#trademark for additional trademark information and notices.

TABLE OF CONTENTS

1 HOW TO INSTALL THE REPORT?
2 WHAT TO DO BEFORE RUNNING THE REPORT?
3 WHAT OPTIONS SHOULD BE CHOSEN IN THE SELECTION SCREEN ON NON-HANA SYSTEMS?
4 WHAT OPTIONS SHOULD BE CHOSEN IN THE SELECTION SCREEN ON HANA SYSTEMS?
5 HOW TO DO SIZING FOR S/4 HANA?
6 HOW TO MONITOR THE EXECUTION OF THE REPORT?
7 WHAT TO DO ONCE THE REPORT IS FINISHED?
8 HOW TO ANALYSE THE LIST OF ERRORS?
9 THE SIZING RESULT IS HIGHER THAN EXPECTED, WHAT COULD BE THE REASON?
10 HOW TO INTERPRET THE RESULTS OF THE SIZING REPORT?
10.1 Sizing result
10.2 Sizing calculation details
10.3 Clean-up details
10.4 Clean-up calculation details
10.5 Sizing of the upgrade shadow instance
10.6 State of archiving
11 WHAT TO DO IF, ONCE MIGRATED TO HANA, THE MEMORY CONSUMPTION IS DIFFERENT FROM THE ESTIMATED ONE?
12 WHAT ARE LOBS AND HYBRID LOBS?
13 HOW TO DO DISK SIZING FOR HANA?
14 WHY IS THE RESULT SLIGHTLY DIFFERENT BETWEEN TWO SUCCESSIVE RUNS?
15 DOES THE REPORT CONSIDER ALL TABLES RELEVANT TO SIZING?
16 DOES THE SIZING RESULT DEPEND ON THE COMPRESSION TECHNOLOGY USED IN THE SOURCE SYSTEM OR WHETHER THE SYSTEM IS UNICODE OR NON-UNICODE?
17 DOES THE REPORT CONSIDER THAT SOME SECONDARY INDEXES WILL BE DELETED WHEN MIGRATING TO HDB?
18 HOW DOES THE SIZING REPORT KNOW WHETHER A TABLE WILL BE IN THE ROW STORE OR THE COLUMN STORE?
19 HOW TO SIZE AN SOH SYSTEM ONCE THE MIGRATION IS DONE?
20 DOES THE REPORT TAKE INTO ACCOUNT CHANGES IN THE DATA MODEL INTRODUCED BY SUITE ON HANA?
21 WHAT ARE TABLES PARTIALLY READ?
22 NOTE REGARDING THE LIST OF LARGEST TABLES
23 WHY IS THE REPORT MAKING A DIFFERENCE BETWEEN TABLES AND KEYS?
24 WHY IS THE REPORT NOT SIZING THE LIVECACHE?
25 WHERE CAN I FIND THE RESULT OF THE REPORT?

1 HOW TO INSTALL THE REPORT?

Check the decision tree below to find out which installation path is suitable for your system. [The decision tree graphic is not reproduced in this text version.]

2 WHAT TO DO BEFORE RUNNING THE REPORT?

The report ZNEWHDB_SIZE can be implemented in your customer namespace. It requires at least SAP_BASIS 620. It is also available in ST-PI 2008_1_[700-710] SP09 and in ST-PI 740 SP00 and above under the name /SDF/HDB_SIZING. Both reports have the same code and functionality; the only difference is the installation procedure. The reports are suitable for sizing all NetWeaver-based products with the exception of BW.

Before running the report you must:
• Test the report execution in the development and QA systems. The report implementation can be tested easily on one small table (e.g. T000).
• Since parallelism is achieved via RFC calls to dialog work processes, the dialog timeout parameter "rdisp/max_wprun_time" must be set high enough. We recommend 3600 seconds at the very minimum (7200 is recommended). If you run into timeout errors and cannot increase the timeout limit, decrease the sample size or choose to execute the report on a specific server group where the timeout limit is higher.
• Make sure the database statistics are up to date (a quick check is sketched after this list).
• Choose where to run the report. It can run on a recent copy of the production system.
• Plan for a long execution time. The runtime varies with the chosen sample size, the size of the database, the database technology and the degree of parallelism. A full database sizing will exceed 1 hour in most cases.
• The report requires authorization object 'S_ADMI_FCD' with authorization field PADM.
• Check the data store distribution. If you plan to have a special distribution of tables across the row and column stores, this should be specified in the report. See question 18 for details.
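How to verify that the statistics are current depends on the source database. As a minimal, hypothetical sketch for an Oracle-based source system (the schema SAPSR3 and table VBAK are placeholder names), you could check when the optimizer statistics of a large table were last refreshed:

  -- Hypothetical check on Oracle; SAPSR3 and VBAK are example names.
  -- An old LAST_ANALYZED date or a NUM_ROWS value far from the real
  -- record count indicates stale statistics.
  SELECT TABLE_NAME, NUM_ROWS, LAST_ANALYZED
    FROM DBA_TABLES
   WHERE OWNER = 'SAPSR3'
     AND TABLE_NAME = 'VBAK';

If the statistics are stale, refresh them with the procedure that is standard for your platform (on SAP systems typically BRCONNECT or the DBA Planning Calendar) before starting the sizing report.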
3 WHAT OPTIONS SHOULD BE CHOSEN IN THE SELECTION SCREEN ON NON-HANA SYSTEMS?

For a standard sizing of your system, choose the number of parallel processes and execute the report with the default options, leaving the select-options empty. For a sizing of the entire database, leave the "list of tables" fields empty. If you want to size only a subset of tables, enter the list in the select-option. We recommend not changing the sample size: a maximum of 100,000 records sampled per table is representative enough for a full database sizing. If you already know of a specific table distribution between the stores, specify it under "Changes to standard stores distributions"; otherwise, leave these fields empty. Refer to question 18 for more details. If you run the report for all tables, run it in the background.

4 WHAT OPTIONS SHOULD BE CHOSEN IN THE SELECTION SCREEN ON HANA SYSTEMS?

If you run the report on HANA, the selection screen is different. The parameter "Size also data that is currently unloaded" controls whether a table should be fully loaded into memory before being measured. Choose to load data into memory if the HANA system has been freshly installed or if you are specifically interested in sizing the entire system including unused (unloaded) objects. Uncheck the parameter if your system has already been in use for some time; in that case, all required objects should already be in memory and the currently unloaded objects should not be part of the memory sizing. Unchecking this parameter also improves the performance of the report and should therefore be used when sizing a live production system.

The second parameter controls whether hybrid LOBs should be read from disk. Reading the size on disk is very slow on HANA 1.0 prior to SPS 12. If you choose to untick this parameter, hybrid LOB information will not be visible in the report output. You can calculate the size on disk of the hybrid LOBs used by the ABAP schema with this SQL:

  SELECT ROUND(SUM(TO_BIGINT(PHYSICAL_SIZE)) / 1024 / 1024 / 1024, 2) AS HYBRID_LOB_SIZE_IN_GB
    FROM M_TABLE_LOB_FILES
   WHERE SCHEMA_NAME LIKE 'SAP%'

To check the size of the hybrid LOBs currently cached in memory, use this SQL:

  SELECT ROUND(INCLUSIVE_SIZE_IN_USE / 1024 / 1024 / 1024, 2)
    FROM M_HEAP_MEMORY
   WHERE CATEGORY = 'Pool/PersistenceManager/LOBContainerDirectory'

5 HOW TO DO SIZING FOR S/4 HANA?

If a transition to S/4 HANA is possible on the sized system, more options appear in the selection screen under the "Choice of the sizing scenario" tab. [Screenshot not reproduced.] If you do not see these options in the selection screen, your system is not suitable for such products (e.g. it is a CRM system or already an S/4 HANA system).

6 HOW TO MONITOR THE EXECUTION OF THE REPORT?

The progress of the report is visible in the job log.

7 WHAT TO DO ONCE THE REPORT IS FINISHED?

Once the report is completed, you must:
- Analyze the error log and correct the errors if necessary. See question 8 for details.
- Check the plausibility of the sizing result. The report output contains a list of the top tables in the row and column stores (the length of this list can be extended with a parameter on the input screen). You must check whether the number of records estimated by the report matches reality. The record count is collected from statistics tables and is the most common source of sizing errors. Perform this check for the top objects (a minimal check is sketched after this list). If there are large deviations between the real record count and the estimated record count, the database statistics are not up to date and the sizing cannot be trusted. Detailed estimations for all tables are available in database table /SDF/HDBSIZING.
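A minimal way to perform this plausibility check, assuming VBAK is one of the top tables listed in the output, is to count the real records and compare the result with the estimate shown by the report:

  -- Hypothetical plausibility check; VBAK is an example table name.
  -- Compare the result with the record count shown in the report
  -- output (or persisted in table /SDF/HDBSIZING).
  SELECT COUNT(*) FROM VBAK

Keep in mind that a full count can take a while on very large tables; running it for the handful of top objects is usually sufficient.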
8 HOW TO ANALYSE THE LIST OF ERRORS?

In the output of the program, a list of tables is given for which the data collection could not be performed. For example, below an error is reported on table /SDF/SMON_WPINFO. [Screenshot not reproduced.] The most common reason for this is an inconsistency in the ABAP dictionary: for example, more fields exist in the database than in the ABAP dictionary, or the table is only partly active. Check the definition of these tables in the dictionary and try to repair the inconsistency. If you cannot find the problem, check the size of the tables in the source system; if a table is big enough to be relevant for sizing (>50 GB on disk), size it separately and add it to your sizing calculation. Alternatively, create a message under component SV-BO-DB.

The table below explains some common errors (error code/message, then comment):

-1: No statistics available; unable to get the number of rows. To fix this error you must start statistics collection on the erroneous tables (see the sketch after this table). Older versions of the report might report unjustified -1 errors; always use the latest version.

1: Function module DB_GET_TABLE_FIELDS returned no results. The table does not exist in the database but exists in the DDIC.

2: Method describe_by_name failed. The ABAP definition of the table is inconsistent.

3: Creation of a dynamic structure failed. Create a message in SV-BO-DB.

4: Open SQL failed on a pool or cluster table.

5 / Database table not found: The database table does not exist.

98: The maximum time allowed, defined by profile parameter "rdisp/max_wprun_time", was reached during collection before a minimum of 10,000 records could be read. This error cannot be corrected. Size the erroneous table separately and add it to the sizing result.

99 / Check ST22: An uncatchable error occurred. A dump (e.g. a timeout) should be visible in ST22. When encountering such errors, the program retries to process the table in a new RFC call and might be able to reprocess it correctly. However, if it fails once more, a second identical dump should be visible in ST22 and the table will be logged with error 99. If the table has a significant size and you cannot fix the issue yourself, create a support message in SV-BO-DB. Check also the termination table below for a list of common terminations.

Sampling error / Sampling unsuccessful: Unexpectedly, no data could be read from a table. Either there is really no data, in which case the error can be ignored, or there was a problem with the sampling SQL; the latter can sometimes be fixed by increasing the sample size. Only analyze this further if the table has a significant size (>50 GB on disk) on the analyzed database. Open a message in SV-BO-DB if the table size is relevant for your sizing.

DB2 DBSL Error (check the FAQ document attached to SAP Note 1872170): These errors occur only on tables of type POOL and are due to the old internal DB2 DBSL architecture. The architectural problem is solved with kernel version 7.49, associated with NetWeaver 7.51, so the error cannot be corrected without a kernel upgrade. However, in the large majority of cases the pool tables are small and irrelevant to the sizing. If the size of the tables is below 50 GB on disk in DB2, the errors should be ignored.

Since version 47 of the report, the error text should be self-explanatory and most error codes are no longer displayed.
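What "start statistics collection" means depends on the database platform. As a hedged example for an Oracle-based system (the schema SAPSR3 and table ZZBIGTAB are placeholder names), statistics for a single table can be gathered with an anonymous PL/SQL block:

  -- Hypothetical example on Oracle; names are placeholders.
  BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPSR3', tabname => 'ZZBIGTAB');
  END;

On SAP installations, statistics are usually maintained through BRCONNECT or transactions DB13/DB20 rather than direct DBMS_STATS calls; use whichever procedure is standard for your database.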
The table below lists common terminations (termination name in ST22, then comment):

MESSAGE_TYPE_X: This termination happens if more than 20 RFC calls encountered an error. To find the root cause, go to transaction ST22 and check the termination that occurred before the MESSAGE_TYPE_X error.

TIME_OUT: Not enough time is available for the processing of the RFCs. Check the value of profile parameter "rdisp/max_wprun_time" or decrease the sample size.

CONNE_CONTAINER_TOO_SHORT: This termination is due to an incorrect installation of the Note. Check the import and changing parameters of function module Z_COLLECT_STATS.

IMPORT_ALIGNMENT_MISMATCH: Implementation error. Make sure the report was installed correctly. If you use /SDF/HDB_SIZING, make sure the same ST-PI Support Package is used everywhere in the transport chain.

9 THE SIZING RESULT IS HIGHER THAN EXPECTED, WHAT COULD BE THE REASON?

Typically, the achieved compression factor is poor with:
- Small systems. If the analyzed database is smaller than 500 GB, the overall compression factor (disk size / estimated memory consumption) will not be impressive, as Suite on HANA requires a fixed size of 50 GB to operate.
- Large cluster tables (e.g. CDCLS, KOCLU, EDI40). When migrating to Suite on HANA, cluster tables are converted to transparent tables. One record in a cluster table becomes N records in the transparent table, where N can be as large as 100. This means the transparent table is much bigger than the cluster table. If the cluster tables are large in your database, they will impact the overall estimated compression factor.

10 HOW TO INTERPRET THE RESULTS OF THE SIZING REPORT?

10.1 SIZING RESULT

The report displays results in a top-down manner. In the first table, you will find the total sizing for Suite on HANA (or S/4HANA if selected) and the sizing that can be achieved by implementing the memory footprint reductions that are made possible by Suite on HANA or S/4 HANA. [A report output example from version 55 is not reproduced here.]

The total estimated memory requirement given by the report should not be considered the final memory sizing result. You must still take other aspects into account:
- Not all the server memory will be available to HANA. The sizing must take the global allocation limit into account (see the sketch at the end of this question).
- There should be enough space left for future data growth.
- Always keep in mind that the report makes estimations, not an exact forecast.

Note: In some cases, other lines can appear in the upper table, such as special sizing for PCLx tables, Live Cache, etc. These additional memory requirements must not always be considered on top of the total sizing:
- Possible additional memory requirement during the transition to S/4 HANA. This is the memory requirement for the conversion of table KONV to PRCD_ELEMENTS. It corresponds to the estimated size of one partition of KONV multiplied by 2 to account for work space. This requirement does not come on top of the total sizing: the transition to HANA is done during downtime, so we can assume that not all data needs to be loaded in memory at the same time.
- Possible additional memory requirement for the upgrade shadow instance. During an upgrade, a shadow instance is created by copying a selection of basis tables, which creates additional memory consumption. This requirement does not come on top of the total sizing: upgrades are performed during downtime, so we can assume that not all data needs to be loaded in memory at the same time.
- Possible additional memory requirement for the de-clustering of PCLx tables as per SAP Note 1774918. If you choose to implement SAP Note 1774918, you must consider this memory requirement on top of your memory sizing. The implementation of SAP Note 1774918 is optional.
- Possible additional memory requirement for the Live Cache. With Suite on HANA, the Live Cache can be integrated directly in HANA. If you choose to do so, you must consider this memory consumption estimation on top of the total sizing.
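To see how much memory HANA may actually allocate on an existing system, a hedged sketch against the standard monitoring view M_SERVICE_MEMORY (the view and column names below are assumed to be available on recent HANA revisions) could look like this:

  -- Sketch: compare each service's effective allocation limit
  -- with its current memory usage, in GB.
  SELECT SERVICE_NAME,
         ROUND(EFFECTIVE_ALLOCATION_LIMIT / 1024 / 1024 / 1024, 2) AS LIMIT_GB,
         ROUND(TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024, 2) AS USED_GB
    FROM M_SERVICE_MEMORY

The gap between the physical server memory and the allocation limit is memory that the sizing result cannot use; keep it in mind when translating the report output into a hardware choice.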
10.2 SIZING CALCULATION DETAILS

In the tables called "MEMORY/DISK SIZING CALCULATION DETAILS", the calculation of the total sizing estimation is detailed. The sub-totals called "Work space" and "Fixed size" estimate the size of all non-data components necessary for running SAP HANA, for example:
• Columns created in the column store by the join engine
• Space required for the delta stores
• Space required for the multi-versioning mechanism (MVCC)
• Space required for merge operations
• Tables not considered by the report (tables existing in the database but not in the ABAP dictionary)
• Space required for query execution (e.g. translation tables, temporary tables)
• Result cache
• Row store memory fragmentation
• Space required for the metadata catalog, name server, statistics server, etc.
• Space required for SQL cursor caches
• Space required for session statistics
• Persisted pages

10.3 CLEAN-UP DETAILS

After transitioning to S/4HANA or installing Suite on HANA, some data footprint reduction possibilities are enabled. These are estimated and summarized in the tables called "DISK/MEMORY SIZING AFTER CLEAN UP". The details of these estimations are available in the table below it, called "MEMORY SIZING CALCULATION DETAILS".

10.4 CLEAN-UP CALCULATION DETAILS

The anticipated requirement after clean-up shows the memory consumption that can be achieved if some data volume management projects are implemented on the system. Below are explanations of common clean-up potentials.

Data aging on infrastructure tables

Data aging offers you the option of moving large amounts of data to disk to free up memory. More information is available in the help pages and in the corresponding best practice document. The sizing report estimates the size that can be paged to disk for the objects CHANGEDOCU, BC_IDOC, WORKFLOW and BC_SBAL; in the detail tables below it, you can see some of the tables concerned. The data aging functionality partitions the data so that "current" records are in a partition loaded to memory, while all other "historical" records are written to partitions that stay on disk. The number above shows the size of the data that can be moved to disk. This concept is sometimes referred to as "hot" and "cold" data: only the "hot" data in these tables is in active use and therefore resides in memory. Access to cold data is still possible, but with a performance penalty. Data aging objects are available for other tables too, but since aging on infrastructure tables such as IDocs and application logs is simple to implement and does not require a large project effort, it is made visible in the sizing report. Using HANA 1.0 SPS 10 or above is recommended before activating data aging. Note that data aging is also available for other objects, such as financial documents; there are therefore further reduction possibilities not shown in this section of the report.
A table with the names of the available data aging objects is printed at the end of the sizing report. If the report is running on a HANA system where aging is already active, the date of the last aging run is visible in the third column. [Example screenshot not reproduced.]

Deletions possible after an upgrade to SAP HANA Finance

If you have selected the S/4 HANA sizing (or S/4 HANA Finance), the report shows some data footprint reduction potentials that become possible:
1) Deletion of obsolete CO documents.
2) Phase-out of FI-SL and the corresponding work space.
3) Phase-out of EC-PCA and the corresponding work space.

Deletion of actuals in COEP and corresponding work space: During the migration to S/4 Finance, actuals are moved from the CO tables to the new table ACDOCA, but they are not deleted from their original tables, for troubleshooting reasons. These redundant records (and others) can be moved out of the database in a second step using SAP Note 2190137.

Phase-out of FI-SL and corresponding work space: Special Ledger tables can in some cases be phased out once S/4 HANA Finance is installed. Whether this is possible can only be decided at project level. The report shows the most optimistic amount of memory that can be saved, i.e. if all FI-SL tables in the system can be phased out.

Phase-out of EC-PCA and corresponding work space: Profit Center Accounting becomes obsolete with S/4 HANA Finance. However, it requires a dedicated project to phase it out of your system. The report shows the amount of memory that can be saved.

10.5 SIZING OF THE UPGRADE SHADOW INSTANCE

Starting with version 59 of the report, a table with the sizing information for the upgrade shadow instance is displayed. During an upgrade, a shadow instance is created by cloning some basis tables; this can lead to an increase in the total memory requirement. You have to decide whether this comes as an additional requirement on top of the total sizing or not. If you plan to perform the upgrade during a low-activity period (weekends, nights or complete downtime), this additional memory usage should not be added to the total sizing. However, if you plan to run a normal workload in parallel to the creation of the shadow instance, you might want to consider this requirement on top of the total system sizing.

10.6 STATE OF ARCHIVING

Finally, at the end of the report, a table is available that shows the current coverage of archiving in the system. Using this table, you can identify the largest estimated HANA objects that might require archiving.

11 WHAT TO DO IF, ONCE MIGRATED TO HANA, THE MEMORY CONSUMPTION IS DIFFERENT FROM THE ESTIMATED ONE?

The sizing report makes an estimation and cannot give a 100% accurate prediction of the memory consumption in HANA. Once you have migrated to HANA, the most reliable way to verify the sizing is to run the sizing report on the HANA system. The sizing report will then read the current actual memory usage of each table and provide the current sizing for that system. When comparing this sizing to the sizing estimation made on the previous database, keep the following points in mind:
- Data growth might have happened between the moment the report was executed on the source database and the migration. Check the record count to calculate how much this has influenced the current sizing.
- It is possible that some objects exist in your HANA database that were not expected by the sizing report. The most common case is concatenated secondary non-unique keys. These keys should be deleted during the migration but sometimes are not (especially indexes created by customers). Check whether these indexes can be deleted. Starting with version 59 of the report, these attributes are detected, displayed in the report and recommended for deletion.
- If the system has just been migrated and is not yet used productively, it is possible that the compression used by HANA is not yet optimal. Usually, a mechanism in HANA identifies whether a better compression algorithm can be used; but if the system is mostly idle and merges are not triggered, this optimization may not have started yet. If a table is significantly larger than expected, you can force an optimization of the compression with this SQL (see also the sketch after this list):

  UPDATE "<your_table>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCED')

  Starting with version 60, the sizing report can start compression runs itself if this is identified as necessary.
- If your intention is to check the accuracy of the sizing, make sure the report is started on HANA with the option "Size also data that is currently unloaded", because when running on the source database, the sizing report assumed that everything will be loaded to memory.

In case the size in HANA deviates noticeably from the sizing made on the source database, create a message on component SV-BO so that SAP can help you identify the reasons for the difference.
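Before forcing a compression run, it can help to look at how the columns of the suspicious table are currently compressed. A hedged sketch using the standard monitoring view M_CS_COLUMNS (the table name ZZBIGTAB is a placeholder):

  -- Sketch: inspect the current compression of a column store table.
  -- 'ZZBIGTAB' is an example name; a COMPRESSION_TYPE of DEFAULT on
  -- large columns usually means no compression optimization has run yet.
  SELECT COLUMN_NAME, COMPRESSION_TYPE, LOADED
    FROM M_CS_COLUMNS
   WHERE TABLE_NAME = 'ZZBIGTAB'

If most large columns still show the default compression, the forced OPTIMIZE_COMPRESSION run mentioned above is the likely remedy.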
12 WHAT ARE LOBS AND HYBRID LOBS?

Large OBject is a data type used to store large records: typically texts, long texts, images, audio, large XML files, Office documents, etc. Some types of LOB also hold binary executable code, such as ABAP code. The exact definition of a LOB varies from one database to another. Typically, in the SAP Business Suite, application records are rarely stored in LOBs; LOB data is rather infrastructure data such as ABAP code, XML, some types of logs, spool outputs, etc. Often, LOB data cannot be compressed well. To avoid unnecessary usage of memory resources, the hybrid LOB concept was introduced with SAP HANA SPS 7. By default, columns of data type LOB are no longer fully loaded to memory on first access. Instead, all records with a size greater than 1000 bytes (the default threshold) are stored on disk in a page chain, and they are loaded to memory page by page only when explicitly requested. For sizing, the report makes the assumption that 20% of the data stored on disk can be loaded to memory at the same time. With HANA 2.0, the hybrid LOB functionality has been optimized in order to minimize the disk space footprint. This optimization, called MidSizeLOB, is taken into account by the sizing report since version 67. In the report output, a list of the column store or row store tables with the largest hybrid LOBs is available (the example screenshot is not reproduced here; the sketch below shows how to produce a similar list yourself). You can find more information regarding LOBs in HANA in SAP Note 2220627.
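To see which tables own the largest hybrid LOBs on an existing HANA system, the monitoring view already used in question 4 can be aggregated per table. A minimal sketch, assuming the ABAP schema matches 'SAP%':

  -- Sketch: top 20 tables by hybrid LOB size on disk, in GB.
  SELECT TOP 20
         TABLE_NAME,
         ROUND(SUM(TO_BIGINT(PHYSICAL_SIZE)) / 1024 / 1024 / 1024, 2) AS LOB_GB
    FROM M_TABLE_LOB_FILES
   WHERE SCHEMA_NAME LIKE 'SAP%'
   GROUP BY TABLE_NAME
   ORDER BY LOB_GB DESC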
13 HOW TO DO DISK SIZING FOR HANA?

The sizing report is the entry point for disk sizing. Use the value referred to as "net data size on disk" and check the sizing guidelines available at http://scn.sap.com/docs/DOC-62595 to complete the disk sizing. The "net data size on disk" is NOT the anticipated total disk sizing for your HANA installation; it is only the starting point of the disk sizing.

14 WHY IS THE RESULT SLIGHTLY DIFFERENT BETWEEN TWO SUCCESSIVE RUNS?

Except when run on SAP MaxDB and HANA, the report uses randomization for data sampling. This means that the calculated average length of a field might be slightly different between one run and the next because different random data has been read, which leads to small differences between consecutive sizing runs on the same selection. The deviation will be much smaller if you choose bigger samples on the input screen.

15 DOES THE REPORT CONSIDER ALL TABLES RELEVANT TO SIZING?

No. Some tables are ignored by the report. This is the case for tables existing in the database but not defined in the ABAP dictionary, or defined but inactive. The most important tables with significant content are DDNTT, DDNTT_CONV_UC and DDNTT_HIST. Their sizes should however be relatively small, and they can generally be ignored for the sizing. Take also into account that some tables that do not exist in the source system might be added to SAP HANA after your migration. For example, you might want to use SAP HANA reporting capabilities and keep historical data that was previously sent to a data warehouse such as SAP BW. Such decisions will impact the HDB sizing.

16 DOES THE SIZING RESULT DEPEND ON THE COMPRESSION TECHNOLOGY USED IN THE SOURCE SYSTEM OR WHETHER THE SYSTEM IS UNICODE OR NON-UNICODE?

The report bases its calculation on metrics that are untouched by compression, such as the number of rows and the uncompressed length of fields. The same applies to non-Unicode systems. There is no need to apply an additional factor to the result of the report. In other words, the compression technology and the Unicode conversion are already taken into account by the sizing results.

17 DOES THE REPORT CONSIDER THAT SOME SECONDARY INDEXES WILL BE DELETED WHEN MIGRATING TO HDB?

Yes, the report uses an up-to-date list of the indexes that will be kept, deleted or created for HDB.

18 HOW DOES THE SIZING REPORT KNOW WHETHER A TABLE WILL BE IN THE ROW STORE OR THE COLUMN STORE?

The location of the data in the column or row store is very important for the sizing, as the compression achieved is very different between the two stores. If the report runs on SAP_BASIS 740, it uses the distribution as defined in the ABAP dictionary (see the sketch at the end of this question for a way to inspect it). If the support package you will use with HANA is the same as the one where the report is currently running, you have nothing to do. If you are running the report on an earlier SAP_BASIS version, the sizing report uses the distribution available in the standard delivery of SAP_BASIS 740 SP08/09. If you plan to differ from this distribution, use the select-option available on the entry screen to specify the changes you plan to make. All tables in the customer namespace are expected to be located in the column store.

In the example below, you plan to use SAP_BASIS 740 SP06 and you run the report on SAP_BASIS 620. Since the report will by default use the row store list of 740 SP08, you have to explicitly inform the sizing report of the changes to the row store list between SP06 and SP08. These changes are detailed in SAP Note 1850112: "In 7.40 Support Package 8, the storage type has been changed to "Column Store" for the following tables: AQDB AQGDB AQLDB AQRDB AQSGD AQSLD AQTDB BF4INDX TAQTS CUSTCONT1 DOKIL HLPINDX IWB0CONT1 IWB1CONT1 IWB2CONT1 IWEXINDX QMM_CONT1….." In total, 51 tables are moved to the column store. Since you are upgrading only to SP06, these tables will still be in the row store in your system. Inform the report accordingly in the select-option. [Screenshot not reproduced.]

Note 1: The tables added to the select-option in the tab "Changes to standard stores distribution" must also be part of the selection done in the select-option "List of tables" (which can also be left blank); otherwise they will be ignored.

Note 2: Very large row store tables with more than 500,000,000 records are automatically moved to the column store during the migration to HANA, for memory consumption and performance reasons. The report takes this limitation into account, and unless you force the choice of store via the select-option, such tables will be estimated as column store tables.
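On SAP_BASIS 740 and higher, the store decision is part of the technical settings in the ABAP dictionary. As a hedged sketch (to the best of our knowledge the preference is stored in field ROWORCOLST of table DD09L on these releases; the value 'R' for the row store is an assumption to be verified in your system), you could list the tables flagged for the row store:

  -- Sketch: tables whose technical settings request the row store.
  -- DD09L-ROWORCOLST and the value 'R' are assumptions; AS4LOCAL = 'A'
  -- restricts the selection to active versions.
  SELECT TABNAME, ROWORCOLST
    FROM DD09L
   WHERE ROWORCOLST = 'R'
     AND AS4LOCAL = 'A'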
19 HOW TO SIZE AN SOH SYSTEM ONCE THE MIGRATION IS DONE?

Refer to SAP Note 1698281 "Assess the memory consumption of a SAP HANA System" and use the Python script memorySizing.py. Note that this script only calculates the memory consumption of the data. To include the necessary work space, multiply the result by 2 and add 50 GB (for example, 400 GB of data gives 400 × 2 + 50 = 850 GB). Starting with version 45 of the report, it is possible to run the sizing report directly on the HANA system. It will size your system based on the real size of the data in memory and, from this, estimate the additional work space necessary. Note that the sizing report only sizes tables belonging to the ABAP schema. If you have a significant amount of data outside this schema, you should add it manually or use the Python script memorySizing.py available in the HANA binaries.

20 DOES THE REPORT TAKE INTO ACCOUNT CHANGES IN THE DATA MODEL INTRODUCED BY SUITE ON HANA?

Starting with version 54, the sizing report shows some possibilities to decrease the memory footprint using data model optimizations, such as the deletion of table VBOX (rebate index). Not all available possibilities are taken into account by the report.

21 WHAT ARE TABLES PARTIALLY READ?

The report can detect that a timeout is about to occur. If this is the case, it stops the data sampling and gives a result based on the sample data that could be read so far. The size of the data sample that could be read within the allowed time is given in a result table in the spool output. If you do not see such a table, all sample data could be read in time. Note: If there are too many partial results, increase the timeout profile parameter rdisp/max_wprun_time via transaction RZ11 and execute the report once again.

22 NOTE REGARDING THE LIST OF LARGEST TABLES

The report gives a list of the largest tables in the column and row stores. The size given is the data size in GB. This means it only shows the size of the table once loaded for the first time into memory; it does not include the other relevant components listed in question 10 (the delta store, for example). Note: The list of top tables in the row store usually contains good candidates for data reorganization. Refer to SAP Note 706478 for a list of tables with data cleaning potential. Refer also to the data management documentation.

23 WHY IS THE REPORT MAKING A DIFFERENCE BETWEEN TABLES AND KEYS?

SAP HANA achieves different compression ratios depending on whether a column is a regular column, a primary or secondary unique key, whether the column is in the row store or the column store, whether it is a LOB, etc. A drill-down is therefore given in the report output to easily identify deviations between the sizing and the reality. These figures are for information only and should not be considered for your sizing.
24 WHY IS THE REPORT NOT SIZING THE LIVECACHE?

When executing the sizing report on an SCM system, a check is performed to verify whether a Live Cache installation is available. Only if the Live Cache is installed on MaxDB and SAP Note 1956837 has been implemented will a sizing of the Live Cache be performed. You can size your Live Cache installation using the function module of SAP Note 1956837 without installing ZNEWHDB_SIZE or /SDF/HDB_SIZING.

25 WHERE CAN I FIND THE RESULT OF THE REPORT?

The result of the report is available in the spool output of the job you started. The detailed result can also be persisted in your database. To achieve this, there are two options:

- With ST-PI 2008_1_[620-710] SP09 and ST-PI 740 SP00 and above, the result of the last run is stored at column level in tables /SDF/CSSIZING and /SDF/RSSIZING. For the column store, the column /SDF/CSSIZING-EST_MS_MAIN contains the estimated memory size in HANA of the column; in case the report runs on HANA, the real size is stored in column /SDF/CSSIZING-MS_MAIN. For the row store, the estimated memory size is stored in /SDF/RSSIZING-EST_MS_TOTAL; in case the report runs on a HANA database, the memory size is the sum of /SDF/RSSIZING-VARIABLE_SIZE and /SDF/RSSIZING-FIXED_SIZE. The two tables are only filled if the report is called with the hidden parameter p_db. To activate this parameter, type "db" in the transaction box of the selection screen. [Screenshot not reproduced.] Alternatively, you can use the API /SDF/TRIGGER_HDB_SIZING, which starts the report with the p_db parameter set to true. Note that tables /SDF/CSSIZING and /SDF/RSSIZING only store the result of the latest sizing; no history is kept, and a sizing run overwrites the results of the previous one. The information from these tables can be read using function module /SDF/READ_HDB_SIZING_RESULTS.

- With ST-PI 2008_1_[700-710] SP12 and ST-PI 740 SP02 and above, the information is also stored in tables /SDF/HDBTABSIZES and /SDF/HDBSIZING. In table /SDF/HDBSIZING, you can find the general information of the sizing run, such as total sizing, date, version of the report used, etc. In /SDF/HDBTABSIZES, the sizing result is stored at table level. The field /SDF/HDBTABSIZES-DATA_SIZE stores the memory size, independently of whether this number was read on a HANA database or estimated from another database. These tables keep history information. With these support packages, it is still possible to write the information into tables /SDF/CSSIZING and /SDF/RSSIZING by using the hidden parameter p_db as described above. The information from all these tables can be read using function module /SDF/READ_HDB_SIZING_RESULTS_2. A direct SQL check is sketched below.
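If you prefer a quick look via SQL rather than the function module, a hedged sketch on a HANA system could be the following (only the field DATA_SIZE is confirmed by this note; the quotes around the table name are required on HANA because of the slash):

  -- Sketch: inspect the persisted per-table sizing results,
  -- largest objects first.
  SELECT TOP 20 *
    FROM "/SDF/HDBTABSIZES"
   ORDER BY DATA_SIZE DESC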
Disclaimer

The output of this tool is given in hardware-independent categories and is not aligned with customer-specific needs or conditions. Therefore, SAP assumes no responsibility for errors or omissions of the output provided herein. The customer is responsible for verifying any output and for deciding whether to implement any of the recommendations made by SAP herein. SAP makes no warranties or representations with respect to the content hereof and specifically disclaims any implied warranties of fitness for any particular purpose. In no event shall SAP be liable to you for any direct damages or for any lost profits, lost savings or any other incidental, indirect, punitive or consequential damages ARISING FROM THE USE OF THE RESPECTIVE SOFTWARE, even if advised or aware of the possibility of such damages. THIS LIMITATION OF LIABILITY SHALL NOT APPLY IN CASES OF WILLFUL MISCONDUCT OR ANY LIABILITIES ACCORDING TO STATUTORY PRODUCT LIABILITY LAWS.

SAP makes no commitment to keep the information contained herein up to date. This tool is based on the most current level of the SAP solution and will be updated if applicable. This tool is copyrighted by SAP, all rights reserved. No parts of this program may be reproduced, transmitted or copied in any form or for any purpose without the express permission of SAP. You may use the tool only for the purposes described above; any other use is strictly prohibited. The information contained in this application is subject to change without notice. SAP reserves the right to make any such changes without obligation to notify any person of such revision or changes.

You may provide feedback to SAP concerning the tool, SAP software or any other SAP data or processes under the following conditions: for any invention included in feedback, you grant SAP and its related companies a license to make, have made, use, lease, sell, offer for sale, import, export or otherwise transfer any apparatus or product through all of its distribution channels and to practice any method covered by the invention, and to sublicense others to do any or all of the foregoing, unless otherwise agreed between you and SAP. "Feedback," for the purposes of this agreement, means information and materials provided by you following the disclosure of the tool, SAP software or any other SAP data or processes, and which relate directly to the design and performance of SAP software and/or other SAP products and materials. Feedback does not include any other information and materials disclosed herein. SAP will adhere to the applicable data protection regulations if and when any personal data is exchanged.