Inventory Data Loading

This document explains, step by step, how to load inventory data into the 0IC_C03 cube.

Inventory scenario: is it required to load data using the 2LIS_03_BX DataSource to initialize the stock opening balances?

2LIS_03_BX - This structure is used to extract stock data from MM Inventory Management (MM-IM) for the initialization of stocks in a BW system.
2LIS_03_BF - This structure is used to extract material movement data from MM-IM consistently to a BW system.
2LIS_03_UM - This structure is used to extract revaluation data from MM-IM consistently to a BW system. It contains only value changes, no quantity changes.

Before you can extract revaluation data to a BW system, you must ensure that the transaction/event key is active. For this, see the following SAP Notes:
- 353042: How To: Activate transaction key (PROCESSKEY)
- 315880: Missing event control after PI-A installation

Whether the BX DataSource is needed depends on the business requirement.

Requirement 1: Data is pulled from an R/3 system that contains 10 years of history, but the business wants to report on inventory/stock data for the last 2 years only. In this case the opening balances can only be obtained through the BX DataSource, compressed WITH marker update. After that, load only the last 2 years of historical data using the BF and UM DataSources and compress WITHOUT marker update.

Requirement 2: Data is pulled from an R/3 system that contains only 2 years of history, and the entire data set is needed for reporting. In this case you can use BX to pull the opening stock balances and compress WITH marker update, then pull BF and UM for the 2 years of historical data and compress WITHOUT marker update, the same as above. Alternatively, just pull the entire 2 years of stock movements using the BF and UM DataSources and compress WITH marker update; this builds the opening balances itself, so there is no need to load the BX DataSource at all.
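The marker logic behind these two requirements can be illustrated with a small model (plain Python, not SAP code; the record layout is invented for illustration). Compressing a request WITH marker update rolls its quantities into the stock marker, while compressing WITHOUT marker update leaves the marker untouched, so historical movements already reflected in the BX opening stock are not double-counted:

```python
# Illustrative model of the 0IC_C03 stock marker (not SAP code).
# The marker holds the reference stock used to derive stock at any date.

class StockCube:
    def __init__(self):
        self.marker = 0          # reference stock quantity
        self.movements = []      # compressed movement records

    def compress(self, request, marker_update=True):
        """Compress a request of quantity deltas into the cube.

        marker_update=True  -> quantities roll into the marker
                               (use for BX init and for delta loads).
        marker_update=False -> marker untouched (use for historical
                               BF/UM loads already reflected in BX stock).
        """
        for qty in request:
            self.movements.append(qty)
            if marker_update:
                self.marker += qty

cube = StockCube()
cube.compress([100], marker_update=True)        # BX opening balance
cube.compress([-20, 5], marker_update=False)    # historical movements
cube.compress([10], marker_update=True)         # delta movements
print(cube.marker)  # prints 110: opening 100 plus the delta of 10
```

If the historical request had been compressed with marker update too, the marker would wrongly count those movements twice relative to the BX initialization.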
How To… Handle Inventory Management Scenarios in BW: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0
August 2010

1) First, lock all users.
2) For the init load, maintain the following serialization: first 2LIS_03_BX, then 2LIS_03_BF, and then 2LIS_03_UM, with the marker updates described below.
3) For delta loads, load 2LIS_03_BF first and then 2LIS_03_UM.
4) Compress every delta load WITH marker update.

Inventory data loading, step-by-step procedure

1) 2LIS_03_BX (Stock Initialization for Inventory Management)

Delete the setup tables using transaction LBWG: select the Inventory Controlling application component, then click the Execute button.

The setup tables are the place from which BW picks up the init data load.

Fill the setup tables for 2LIS_03_BX using transaction MCNB: in the Transfer Structure tab select 2LIS_03_BX, change the time of termination, and click the Execute button.

After filling the setup tables, go to transaction RSA3, enter the DataSource name, execute, and check whether data reached the setup tables. If RSA3 shows data, your setup tables are filled.

Log on to the BI side, select and expand the tree of 0IC_C03, and trigger the InfoPackage for 2LIS_03_BX to load the data up to the PSA. Then trigger the DTP to load from the PSA to 0IC_C03. Make sure the DTP contains the settings shown. Check that the data loaded successfully into the cube.

Go to the Manage screen, click the Collapse tab, and make sure "No Marker Update" is NOT checked; enter the request ID and click Release. The data is now compressed WITH marker update.

2LIS_03_BF (Goods Movements from Inventory Management)

Fill the setup tables for 2LIS_03_BF using transaction OLI1BW: enter the name of the run, change the termination time, and click the Execute button. If RSA3 shows data, your setup tables are filled.
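The serialization and marker rules listed at the start of this procedure can be captured in a tiny checker (illustrative Python, not SAP code; the plan representation is invented for the example):

```python
# Illustrative check of the init-load serialization and marker settings:
# BX first, then BF, then UM; marker update only for the BX initialization,
# not for the historical BF/UM loads.

INIT_ORDER = ["2LIS_03_BX", "2LIS_03_BF", "2LIS_03_UM"]
EXPECTED_MARKER = {"2LIS_03_BX": True, "2LIS_03_BF": False, "2LIS_03_UM": False}

def valid_init_plan(plan):
    """plan: list of (datasource, marker_update) tuples for the init load."""
    sources = [ds for ds, _ in plan]
    if sources != INIT_ORDER:
        return False  # wrong load sequence
    return all(marker == EXPECTED_MARKER[ds] for ds, marker in plan)

good_plan = [("2LIS_03_BX", True), ("2LIS_03_BF", False), ("2LIS_03_UM", False)]
bad_plan = [("2LIS_03_BF", False), ("2LIS_03_BX", True), ("2LIS_03_UM", False)]
```

A plan that loads BF before BX, or compresses the historical loads with marker update, fails the check.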
After filling the setup tables, go to transaction RSA3, enter the DataSource name, execute, and check whether data reached the setup tables.

Log on to the BI side, select and expand the tree of 0IC_C03, select the InfoPackage for 2LIS_03_BF, and trigger it to load the data up to the PSA. Then trigger the DTP to load the data from the PSA to 0IC_C03. Check that the data loaded successfully into the cube. Go to the Manage screen, click the Collapse tab, and make sure "No Marker Update" IS checked; enter the request ID and click the Release button. The data is now compressed WITHOUT marker update.

2LIS_03_UM (Revaluations)

Fill the setup tables for 2LIS_03_UM using transaction OLIZBW: give the name of the run, enter the company code, change the termination time, and click the Execute button. After filling the setup tables, go to transaction RSA3, enter the DataSource name, execute, and check whether data reached the setup tables. If RSA3 shows data, your setup tables are filled.

Log on to the BI side, select and expand the tree of 0IC_C03, select the init InfoPackage for 2LIS_03_UM, and trigger it to load the data up to the PSA. Then trigger the DTP to load the data from the PSA to 0IC_C03. Check that the data loaded successfully into the cube. Go to the Manage screen, click the Collapse tab, and make sure "No Marker Update" IS checked; enter the request ID and click the Release button. The data is now compressed WITHOUT marker update.

Inventory Delta Load: 2LIS_03_BF

Go to transaction LBWE, then click on the Job Control tab.

1) Click on Start Date; in the following screen, enter the date and time at which the delta load should move data from the extraction queue (LBWQ) to the delta queue (RSA7). Then click Print Parameters. After entering the print parameters, click Schedule Job.

Log on to the BI side, select and expand the tree of 0IC_C03, select the delta InfoPackage for 2LIS_03_BF, and trigger it to load the data up to the PSA. Then trigger the DTP to load the data from the PSA to 0IC_C03.
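The delta flow just described — movements collected in the extraction queue (LBWQ), moved to the delta queue (RSA7) by the scheduled job, then drained by the delta InfoPackage — can be modeled roughly as follows (plain Python with invented record structures, not SAP code):

```python
from collections import deque

# Rough model of the LO extraction delta flow (illustrative only).
extraction_queue = deque()   # LBWQ: filled as goods movements are posted
delta_queue = deque()        # RSA7: read by the delta InfoPackage

def post_goods_movement(doc):
    """A delta-relevant posting lands in the extraction queue first."""
    extraction_queue.append(doc)

def collective_run():
    """The job scheduled in LBWE job control moves LBWQ -> RSA7."""
    while extraction_queue:
        delta_queue.append(extraction_queue.popleft())

def delta_infopackage():
    """The delta InfoPackage drains RSA7 (here, into a stand-in PSA list)."""
    psa = []
    while delta_queue:
        psa.append(delta_queue.popleft())
    return psa

post_goods_movement({"material": "M1", "qty": 5})
post_goods_movement({"material": "M2", "qty": -3})
collective_run()
loaded = delta_infopackage()
```

Until the collective run executes, the delta InfoPackage sees nothing — which is why the job's start date and time matter in step 1 above.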
Check that the data loaded successfully into the cube. Go to the Manage screen, click the Collapse tab, and make sure "No Marker Update" is NOT checked; enter the request ID and click the Release button. The data is now compressed WITH marker update.

Inventory Delta Load: 2LIS_03_UM

Select and expand the tree of 0IC_C03, select the delta InfoPackage for 2LIS_03_UM, and trigger it to load the data up to the PSA. Then trigger the DTP to load the data from the PSA to 0IC_C03. Check that the data loaded successfully into the cube. Go to the Manage screen, click the Collapse tab, and make sure "No Marker Update" is NOT checked; enter the request ID and click the Release button. The data is now compressed WITH marker update.

How-To Guide: Data Manager Start and End Routine BAdIs in BPC

This How-To Guide (HTG) describes the procedure for creating and configuring the Data Manager Start Routine and End Routine BAdIs within BPC.

1. Business Scenario

SAP BusinessObjects Planning and Consolidation, version for SAP NetWeaver (herein referred to as "BPC") Data Manager is designed to help the user move data into and out of BPC. Transformation files and conversion files are used to aid in the manipulation of the imported data before it is written to the database. As of BPC 7.5, transformation files support the use of two new parameter options, known as START_ROUTINE and END_ROUTINE. These parameters are used to trigger BAdI implementations on the backend ABAP system, where the user is allowed to write custom ABAP code to process the data to specific requirements.

The START_ROUTINE is called after the data has been read from the source, and before the transformation or conversion file logic is applied to it. The END_ROUTINE is called after all processing has been done, and just before the data is committed to the database.

In this guide, we will demonstrate how to implement both. For the START_ROUTINE BAdI implementation, we will implement a check which ensures that an imported property value length is not greater than a certain length. For the END_ROUTINE BAdI implementation, we will implement code which retrieves a property value from a custom database table and updates the BPC master data accordingly.

2. Background Information

SAP Business Add-Ins (BAdIs) are one of the most important technologies used to adapt SAP software to specific requirements. BAdIs are the basis for object plug-ins that can enhance the functions in ABAP programs without having to make core software modifications. As of Release 7.0 of the SAP NetWeaver Application Server ABAP, BAdIs are part of the Enhancement Framework, where they represent explicit enhancement options. BAdI calls can be integrated into customer applications (like BusinessObjects Planning and Consolidation, version for SAP NetWeaver) to allow enhanced customization of standard application functionality.

The Step-by-Step section will outline the steps needed to create the BAdI itself, in addition to the configuration required within BPC to actually execute the BAdI. The Appendix section contains the example ABAP code that goes along with this guide's Business Scenario. This code is only meant as an example: while it will perform the actions described in this guide, it may not match the exact needs of your own particular Business Scenario. It is only intended to guide you in the creation of your own BAdI implementation.

3. Prerequisites

Required/recommended expertise or prior knowledge:
- SAP BusinessObjects Planning and Consolidation 7.5, SP00 and higher
- ABAP programming skills
- Access to SAP NetWeaver transaction codes: SE20, SE19, SE18, SE24, SE38, SE80, STMS

Additional documentation:
- RKT Online Knowledge Product: http://service.sap.com/rkt — on the left-hand side, navigate to SAP Ramp-Up Knowledge Transfer -> SAP BusinessObjects EPM Solutions -> SAP BO PC 7.5,
version for SAP NetWeaver
- Other EPM How-To Guides: http://wiki.sdn.sap.com/wiki/display/BPX/Enterprise+Performance+Management+%28EPM%29+How-to+Guides
- SAP Help Library, Business Add-Ins: http://help.sap.com/saphelp_nw70/helpdata/en/8f/f2e540f8648431e10000000a1550b0/frameset

4. Step-by-Step Procedure

A BAdI implementation is the term used in the Enhancement Framework for an enhancement implementation element. A BAdI implementation consists of a BAdI implementation class that implements the BAdI interface. The BAdI implementation also contains a filter condition, which is specified in the BAdI definition; this filter condition is used to select the BAdI implementation to execute at runtime.

4.1 Create the Start Routine BAdI Implementation

1. Log on to the NetWeaver system via SAPgui.
2. Enter transaction SE18 and press "Enter".
3. In the initial screen, enter the name of the corresponding enhancement spot: enter UJD_ROUTINE, and click "Display".
4. On the left side of the screen, expand the BAdI definition tree by clicking on the expand icon. You should then see the following nodes:
   - Interface
   - Filter
   - Implementations
5. Right-click on the "Implementations" node, and choose "Create BAdI Implementation".
6. In some cases, a developer may have already created an enhancement implementation for this enhancement spot for a different BAdI definition. If an enhancement implementation already exists, a dialog listing all implementations will be displayed; click the "Create" button in the lower right hand corner. If this dialog is not displayed, continue directly with this step. In the dialog, enter the name of the enhancement implementation and the short description. Name it ZUJ_DM_ROUTINE_EX, then click the green check to continue.
7. Enter the name of a package for transporting this BAdI to another system in your landscape, or click "Local Object" if you do not plan to transport this BAdI.
8. If there are no other previous implementations, go directly to the dialog box shown in step 9. If you are presented with a dialog listing enhancement implementations, select your enhancement implementation which was just created in step 6, and click the green check.
9. In this dialog, enter the name of the BAdI Implementation as ZUJ_START_ROUTINE_EX_IMP, enter the name of the implementing class as ZCL_UJD_START_ROUTINE_EX, and enter the description. Then click the green check.
10. Enter the name of a package for transporting this BAdI to another system in your landscape, or click "Local Object" if you do not plan to transport this BAdI.
11. The BAdI Implementation will then be saved. Notice it is not yet active.
12. Click on the expand icon next to the name of the BAdI Implementation. This will expose the following nodes:
   - Implementing Class
   - Filter Values
13. Double-click on the "Filter Val." node.
14. Click the "Change" icon.
15. Click on the "Combination" button on the filter values screen.
16. Next, double-click on the BADIIMPL_NAME line of the combination.
17. Enter the name of the BAdI implementation into the "Value 1" field, and set the drop-down box for "Comparator 1" to "=". This is the value which will be passed in the transformation file later on; it could be anything, as long as this name and the name used in the transformation file are the same. In this example, we will use ZUJ_START_ROUTINE_EX_IMP for consistency. Finally, click the "Green Check" to continue.
18. Double-click on the "Implementing Class" node on the left side of the screen, then save and activate by clicking the appropriate buttons. All objects should then be active.
19. In the following dialog, first click the "Select All" button,
and then the "Green Check" button.
20. Now double-click on the implementing class name. Due to forward navigation, the implementing class is displayed in the class builder tool. The code which will be executed by the BAdI implementation can be inserted into this implementing class.
21. Click the "Change" icon.
22. Then double-click on the RUNROUTINE method. An empty method implementation will be shown. Notice that the method signature is displayed at the top; if the method signature is not displayed, click the "Signature" button on the application toolbar.
23. Copy and paste the source code from Appendix 5.1 into the RUNROUTINE method. For the Start Routine BAdI, the IR_DATA parameter contains the importing data in a comma-delimited format. In most cases, you will need to convert this comma-delimited format to a structured internal table in order to work with the data efficiently; the example code shows how to do this conversion. After the conversion, this code checks the value of the PROFIT_CENTER column and ensures that its length is not greater than 10.
24. Save and activate the class by clicking the appropriate buttons.

4.2 Create the End Routine BAdI Implementation

1. Once again, log on to the NetWeaver system via SAPgui.
2. Enter transaction SE18 and press "Enter".
3. In the initial screen, enter the name of the corresponding enhancement spot: enter UJD_ROUTINE, and click "Display".
4. On the left side of the screen, expand the BAdI definition tree by clicking on the expand icon. You should then see the following nodes:
   - Interface
   - Filter
   - Implementations
5. Right-click on the "Implementations" node, and choose "Create BAdI Implementation".
6. Select the enhancement implementation which was created in section 4.1, and click the green check.
7. In this dialog, enter the name of the BAdI Implementation as ZUJ_END_ROUTINE_EX_IMP, enter the name of the implementing class as ZCL_UJD_END_ROUTINE_EX, and enter the description.
8. Enter the name of a package for transporting this BAdI to another system in your landscape, or click "Local Object" if you do not plan to transport this BAdI.
9. The BAdI Implementation will then be saved. Notice it is not yet active.
10. Click on the expand icon next to the name of the BAdI Implementation. This will expose the following nodes:
    - Implementing Class
    - Filter Values
11. Double-click on the "Filter Val." node.
12. Click the "Change" icon.
13. Click on the "Combination" button on the filter values screen.
14. Next, double-click on the BADIIMPL_NAME line of the combination.
15. Enter the name of the BAdI implementation into the "Value 1" field, and set the drop-down box for "Comparator 1" to "=". This is the value which will be passed in the transformation file later on; it could be anything, as long as this name and the name used in the transformation file are the same. In this example, we will use ZUJ_END_ROUTINE_EX_IMP for consistency. Finally, click the "Green Check" to continue.
16. Double-click on the "Implementing Class" node on the left side of the screen, then save and activate by clicking the appropriate buttons. All objects should then be active.
17. In the following dialog, first click the "Select All" button, and then the "Green Check" button.
18. Now double-click on the implementing class name. Due to forward navigation, the implementing class is displayed in the class builder tool. The code which will be executed by the BAdI implementation can be inserted into this implementing class.
19. Click the "Change" icon.
20. Then double-click on the RUNROUTINE method. An empty method implementation will be shown. Notice that the method signature is displayed at the top; if the method signature is not displayed, click the "Signature" button on the application toolbar.
21. Copy and paste the source code from Appendix 5.2 into the RUNROUTINE method. For the End Routine BAdI, the IR_DATA parameter contains the importing data in a structured format, so unlike the Start Routine BAdI, you do not have to worry about converting a comma-delimited format to a structured internal table. In this implementation, the value for ECC_CC will come from a custom table ZECC_CC based on the ID value. Note: the ZECC_CC table will not exist in your system; it is only used as an example in this guide, and you will need to implement your own data retrieval logic in that section of code.
22. Save and activate the class by clicking the appropriate buttons.

4.3 Test the BAdI Implementations

In this example, the P_CC dimension has been modified to include two properties, PROFIT_CENTER and ECC_CC. The screenshot below shows the newly created properties and their lengths. This guide assumes that the reader knows how to add properties to dimensions, so that will not be covered here.

We will attempt to load the "Spain" and "Austria" cost center dimension members into the P_CC dimension, using the upload file shown below. The PROFIT_CENTER property value length will be checked by the Start Routine BAdI implementation, and if it is too long, the record will be rejected. Since the value 50000200001 is 11 characters long, we would expect the Austria record to be rejected and the Spain record to be updated correctly.

The ECC_CC property value is blank in the file, and will be filled at runtime by the End Routine BAdI implementation by retrieving the value from a custom "Z" table, ZECC_CC, based on the ID field. Again, this "Z" table will not exist in your system; it is only used as an example implementation, and the source data could potentially come from a number of other sources.
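The behavior of the two example routines in this test scenario can be modeled compactly as follows (illustrative Python, not the ABAP from the appendix; the ZECC_CC lookup is faked with an in-memory dict, and the record layout is invented for the example):

```python
# Toy model of the two example routines (illustrative only).

ZECC_CC = {"Spain": "CC_ES_01"}  # stands in for the custom "Z" table

def start_routine(records):
    """Reject records whose PROFIT_CENTER value is longer than 10 characters."""
    accepted, rejected = [], []
    for rec in records:
        (rejected if len(rec["PROFIT_CENTER"]) > 10 else accepted).append(rec)
    return accepted, rejected

def end_routine(records):
    """Fill ECC_CC from the lookup table, keyed by the member ID."""
    for rec in records:
        rec["ECC_CC"] = ZECC_CC.get(rec["ID"], "")
    return records

members = [
    {"ID": "Spain", "PROFIT_CENTER": "5000010001", "ECC_CC": ""},
    {"ID": "Austria", "PROFIT_CENTER": "50000200001", "ECC_CC": ""},
]
accepted, rejected = start_routine(members)   # Austria: 11 chars, rejected
accepted = end_routine(accepted)              # Spain gets its ECC_CC value
```

This mirrors the expected test outcome: Austria lands in the rejected records, and Spain is updated with the looked-up cost center.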
This file has already been uploaded to the BPC file service.

1. Go to the BPC Excel Client and log on to the application set. In this example, a copy of APSHELL called APSHELL_HTG is used.
2. From the action pane, click "Manage Data".
3. Click "Maintain Transformations".
4. Click "Create New Transformation".
5. Insert a new row into the transformation file under the *OPTIONS section. Enter the parameter name STARTROUTINE and assign the value ZUJ_START_ROUTINE_EX_IMP. This is the name that you provided when defining the filter for the BAdI implementation.
6. Insert another new row into the transformation file under the *OPTIONS section. Enter the parameter name ENDROUTINE and assign the value ZUJ_END_ROUTINE_EX_IMP. Again, this is the name that you provided when defining the filter for the BAdI implementation.
7. Validate and save the transformation file by clicking the link in the action pane. Save the file as transformation_badi_example.xls in the example folder.
8. Close the transformation file and click on the "Manage Data" link in the action pane.
9. Click on "Run a Package".
10. Select "ImportMasterData", and click "Run".
11. In this dialog, select the import file containing the new master data, for example P_CC_Data.txt. Again, this file has already been uploaded to the BPC file service.
12. Next, select the transformation file called "transformation_badi_example.xls" which you have just created.
13. Select the appropriate dimension and click "Finish".
14. Click "Ok".
15. Click the "View Status" button.
16. The package should now be running.
17. Once the package is complete, you can check the results by selecting the package and clicking the "Detail" button. Notice that the error raised from the Start Routine shows up in this log.
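For reference, after steps 5 and 6 above, the *OPTIONS section of the transformation file would contain lines like the following. Only the STARTROUTINE and ENDROUTINE lines come from this guide; the other option lines shown are typical defaults for a delimited master data import and may differ in your file:

```
*OPTIONS
FORMAT = DELIMITED
HEADER = YES
DELIMITER = ,
STARTROUTINE = ZUJ_START_ROUTINE_EX_IMP
ENDROUTINE = ZUJ_END_ROUTINE_EX_IMP
```

The two routine values must match the BADIIMPL_NAME filter values entered in SE18, or the BAdI implementations will not be triggered at runtime.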
18. The detail screen shows that the Start and End Routines have been called successfully. Click on the "Rejected Records" node. The rejected Austria record shows here, as well as the reason why it was rejected.
19. From the Admin Console, you can check to make sure that the records have been created successfully. Spain has been added successfully.

5. Appendix

5.1 Source Code for Start Routine BAdI Implementation

METHOD if_ujd_routine~runroutine.

  TYPE-POOLS: abap.

  DATA: lo_dataref TYPE REF TO data.
  DATA: lo_struct_descr TYPE REF TO cl_abap_structdescr.
  DATA: lo_field_type TYPE REF TO cl_abap_datadescr.
  DATA: lt_components TYPE abap_component_tab.
  DATA: lt_columns TYPE TABLE OF string.
  DATA: lt_column_data TYPE TABLE OF string.
  DATA: lt_message TYPE uj0_t_message.
  DATA: lt_error_reason TYPE uj0_t_message.
  DATA: lv_tabix TYPE sy-tabix.

  FIELD-SYMBOLS: <lt_data_im> TYPE STANDARD TABLE.
  FIELD-SYMBOLS: <lt_data_ex> TYPE STANDARD TABLE.
  FIELD-SYMBOLS: <lt_data_er> TYPE STANDARD TABLE.
  FIELD-SYMBOLS: <lt_data_struct> TYPE STANDARD TABLE.
  FIELD-SYMBOLS: <ls_data> TYPE ANY.
  FIELD-SYMBOLS: <ls_data_struct> TYPE ANY.
  FIELD-SYMBOLS: <ls_components> TYPE abap_componentdescr.
  FIELD-SYMBOLS: <ls_columns> TYPE string.
  FIELD-SYMBOLS: <ls_column_data> TYPE ANY.
  FIELD-SYMBOLS: <ls_message> TYPE uj0_s_message.
  FIELD-SYMBOLS: <lv_line> TYPE ANY.
  FIELD-SYMBOLS: <lv_data_field> TYPE ANY.
  FIELD-SYMBOLS: <lv_profit_center> TYPE string.

* Assign importing data reference to field symbol
  ASSIGN ir_data->* TO <lt_data_im>.
* Create work area for importing data
  CREATE DATA lo_dataref LIKE LINE OF <lt_data_im>.
  ASSIGN lo_dataref->* TO <ls_data>.

* Create new internal tables for exporting data and error data
  CREATE DATA lo_dataref LIKE TABLE OF <ls_data>.
  ASSIGN lo_dataref->* TO <lt_data_ex>.
  CREATE DATA lo_dataref LIKE TABLE OF <ls_data>.
  ASSIGN lo_dataref->* TO <lt_data_er>.

* Build internal table with true columns from the header row,
* and remove the header row from the importing table.
* Store column names in LT_COLUMNS
  READ TABLE <lt_data_im> ASSIGNING <ls_data> INDEX 1.
  IF sy-subrc = 0.
    ASSIGN COMPONENT `LINE` OF STRUCTURE <ls_data> TO <lv_line>.
    SPLIT <lv_line> AT `,` INTO TABLE lt_columns.
*   Move header row to exporting table.
    APPEND <ls_data> TO <lt_data_ex>.
    DELETE <lt_data_im> INDEX 1.
  ENDIF.

  lo_field_type ?= cl_abap_datadescr=>describe_by_name( `UJ_LARGE_STRING` ).
  LOOP AT lt_columns ASSIGNING <ls_columns>.
    APPEND INITIAL LINE TO lt_components ASSIGNING <ls_components>.
    <ls_components>-name = <ls_columns>.
    <ls_components>-type = lo_field_type.
  ENDLOOP.

* Use RTTS to describe the structure
  lo_struct_descr = cl_abap_structdescr=>create( p_components = lt_components
                                                 p_strict     = abap_false ).

* Create structure from describer
  CREATE DATA lo_dataref TYPE HANDLE lo_struct_descr.
  ASSIGN lo_dataref->* TO <ls_data_struct>.

* Create internal table from structure
  CREATE DATA lo_dataref LIKE TABLE OF <ls_data_struct>.
  ASSIGN lo_dataref->* TO <lt_data_struct>.

* Fill new structured internal table with data from importing parameter
  LOOP AT <lt_data_im> ASSIGNING <ls_data>.
    ASSIGN COMPONENT `LINE` OF STRUCTURE <ls_data> TO <lv_line>.
    SPLIT <lv_line> AT `,` INTO TABLE lt_column_data.
    APPEND INITIAL LINE TO <lt_data_struct> ASSIGNING <ls_data_struct>.
    LOOP AT lt_column_data ASSIGNING <ls_column_data>.
      ASSIGN COMPONENT sy-tabix OF STRUCTURE <ls_data_struct> TO <lv_data_field>.
      IF sy-subrc <> 0.
        CONTINUE.
      ENDIF.
      <lv_data_field> = <ls_column_data>.
    ENDLOOP.
  ENDLOOP.

* Now you have an internal table with true columns.
* Loop over each imported record, read the corresponding record from the
* structured internal table, and perform the check on the profit center column.
  LOOP AT <lt_data_im> ASSIGNING <ls_data>.
    lv_tabix = sy-tabix.
    READ TABLE <lt_data_struct> ASSIGNING <ls_data_struct> INDEX lv_tabix.
    IF sy-subrc <> 0.
      CONTINUE.
    ENDIF.
*   Get value of profit center
    ASSIGN COMPONENT `PROFIT_CENTER` OF STRUCTURE <ls_data_struct>
      TO <lv_profit_center>.
    IF sy-subrc <> 0 OR <lv_profit_center> IS INITIAL.
      CONTINUE.
    ENDIF.

*   Check the length; if > 10, add to reject table,
*   otherwise add to exporting table.
    IF STRLEN( <lv_profit_center> ) > 10.
      APPEND <ls_data> TO <lt_data_er>.
      APPEND INITIAL LINE TO lt_error_reason ASSIGNING <ls_message>.
      <ls_message>-msgid = `00`.
      <ls_message>-msgno = `208`.
      <ls_message>-msgty = `E`.
      <ls_message>-msgv1 = `Profit Center value can not be > 10`.
      <ls_message>-recno = lv_tabix + 1. " Account for removed header line
      <ls_message>-message = cl_uj_utl_message=>get_message_text(
        i_language = sy-langu
        is_message = <ls_message> ).
    ELSE.
      APPEND <ls_data> TO <lt_data_ex>.
    ENDIF.
  ENDLOOP.

  IF lt_error_reason IS NOT INITIAL.
    APPEND INITIAL LINE TO lt_message ASSIGNING <ls_message>.
    <ls_message>-msgid = `00`.
    <ls_message>-msgno = `208`.
    <ls_message>-msgty = `E`.
    <ls_message>-msgv1 = `Error occured during Start Routine BAdI processing`.
    <ls_message>-message = cl_uj_utl_message=>get_message_text(
      i_language = sy-langu
      is_message = <ls_message> ).
  ENDIF.

* Export data to the exporting data references
  GET REFERENCE OF <lt_data_ex> INTO er_data.
  GET REFERENCE OF <lt_data_er> INTO er_error_data.
  et_message      = lt_message.
  et_error_reason = lt_error_reason.

ENDMETHOD.

5.2 Source Code for End Routine BAdI Implementation

METHOD if_ujd_routine~runroutine.

  DATA: lo_dataref TYPE REF TO data.

  FIELD-SYMBOLS: <lt_data_im> TYPE STANDARD TABLE.
  FIELD-SYMBOLS: <lt_data_ex> TYPE STANDARD TABLE.
  FIELD-SYMBOLS: <ls_data> TYPE ANY.
  FIELD-SYMBOLS: <lv_id> TYPE string.
  FIELD-SYMBOLS: <lv_ecc_cc> TYPE string.
* Assign importing data reference to field symbol
  ASSIGN ir_data->* TO <lt_data_im>.

* Create work area for importing data
  CREATE DATA lo_dataref LIKE LINE OF <lt_data_im>.
  ASSIGN lo_dataref->* TO <ls_data>.

* Create new internal table for exporting data
  CREATE DATA lo_dataref LIKE TABLE OF <ls_data>.
  ASSIGN lo_dataref->* TO <lt_data_ex>.

* Get ECC_CC value from custom table or other data source
  LOOP AT <lt_data_im> ASSIGNING <ls_data>.

*   Get ID value
    ASSIGN COMPONENT `ID` OF STRUCTURE <ls_data> TO <lv_id>.
    IF sy-subrc <> 0 OR <lv_id> IS INITIAL.
      CONTINUE.
    ENDIF.

*   Get reference to ECC_CC field
    ASSIGN COMPONENT `ECC_CC` OF STRUCTURE <ls_data> TO <lv_ecc_cc>.
    IF sy-subrc <> 0.
      CONTINUE.
    ENDIF.

*   Get the cost center value from custom table ZECC_CC per ID and fill the
*   field in the structure, then append to the exporting table. This is just
*   an example of what can be done here; the value could potentially come
*   from many other sources, but to keep it simple, this HTG simply gets a
*   value from a "Z" table. This "Z" table will not exist in your system,
*   so you must implement your own data retrieval method here.
*   SELECT SINGLE cost_center
*     INTO <lv_ecc_cc>
*     FROM zecc_cc
*     WHERE id = <lv_id>.
    APPEND <ls_data> TO <lt_data_ex>.
  ENDLOOP.

* Export data to the exporting data reference
  GET REFERENCE OF <lt_data_ex> INTO er_data.

ENDMETHOD.

InfoPackage load

When the InfoPackage load is running, check the following:
- Use transaction RSMON > click on Load Data > check the status, and use "Refresh all" to refresh the data being loaded.
- Use transaction SM50 to check the background (BGD) processes that are running.
- Use transaction SM37 to determine whether the job has been scheduled, and check the job details and the job logs to determine the status.

DSO load

When a DSO load is running, check the following:
- Use the monitor for the DTP and check the status of the data load. When the DTP is used to load data to the DSO, you can always check the DTP monitor to determine the status of the load; also check the "Error Stack" button to determine whether there are any errors.
- Check the PSA table for the DataSource to confirm the data load: click on the DataSource, hit the "Manage PSA" button, and determine the load status. Once data is completely loaded in the Persistent Staging Area, you know that the data has been completely loaded.
- Use transaction RSMON > click on Monitors > DataStore Objects > search for the DSO, and determine whether the two fields "last loading request ID" and "activated up to request ID" contain values.
If there are no numbers in the "activated up to request ID" column, hit the Activate button at the right. When you activate the DSO (that is, fill the active tables), you can check three places to determine the status of the flow:

1) Obtain the job name displayed the moment you hit the Start button while activating the DSO, go to SM37, and check the job status and logs; keep refreshing until the status says FINISHED.
2) At the same time, check SM50 and look at the BGD processes running under your name; keep refreshing until the BGD process stops running.
3) Check the request IDs in RSMON as described above.

Before anything else, make sure you have no data hiding in the "Initialization Options": if you find any data record there before loading, select it and delete it.

To determine what data is loaded in the Persistent Staging Area: hit the Monitor button at the top, click on the step-by-step analysis button, and hit the Analysis button at the bottom to find the PSA database table and its data. Alternatively, find your DataSource, click on Technical Attributes to read the PSA table name, then go to transaction SE16 and enter that table name (in this example, PSA table /BIC/B0001997000). This is a simple way to find what data gets loaded into your Persistent Staging Area.

BI Metadata

Launching Web Application Designer for the first time downloads all the metadata stored on the BI server; the metadata is pulled from the metadata repository. Viewing reports on the web requires a repository manager in Knowledge Management, which can pull all the metadata to display the reports.

Error DTP handling

When a DTP is created, the standard options below are available.
Reporting Possible (request Green) Here the data will be available for reporting in the target info provider but the error records will not be shown in red. No reporting (request Red) Here the incorrect data will NOT be available in the target info provider and the error records will be shown in red. 2 Valid Records Update. You have to go to the error DTP and catch the incorrect data. No reporting This is the standard option available for DTP. . .Go to the error DTP and click on the cushion button to get the below screen. .Notice that the req del date is missing Correct the missing req del date. If you don’t want to take the above approach. If you see a “Yellow” it means that the data loading is going on since you have chosen 1 Valid Records Update.Go back and run the ERROR DTP again and below is the result: Now go back to the actual DTP and run it again and you might see this message: If you choose “Repair” it will not delete the existing records and continue to load the non-error records to be loaded in the target info provider. If you choose “Delete “ it will delete the existing records in the target info provider and start a fresh load. No reporting (request Red) Data Transfer Process (DTP) . go to the maintain link of the target info provider and click on the monitor and change the “red” status to green and that should solve the problem as shown below. When the DTP is executed with this option for the first time.Extraction Modes The data from source can be loaded into to target by using either Full or Delta mode. it brings all requests from the source into target and also sets the target in such way that it is initialized. Delta: No initialization is required if extraction mode „Delta‟ selected. . b.x by putting „No data transfer.If you selected transfer mode Delta. in info package have an option Initialization without data transfer. In 3. It loads all data/requests from source into target.x. Full: It behaves same like info package with option “Full”. 
a. Only get delta once: select this option when only the most recent data is required in the data target. With this setting, from the second load onwards the DTP deletes the overlapping request from the data target and keeps only the last loaded request. To delete overlapping requests from the data target, you have to select this option and use the "Delete overlapping requests" process type in the process chain.

b. Get all new data request by request: if you do not select this option, the DTP loads all new requests from the source into a single target request. If you select it, the DTP loads request by request from the source and keeps the same requests in the target. Select this option when there are many new requests in the source and the data volume is large.

No data transfer; delta status in source: fetched: this option behaves exactly as explained above for "Initialization without data transfer" in 3.x — the delta status is set to fetched without transferring any data.

Processing Mode: these modes detail the steps that are carried out during DTP execution (e.g. extraction, transformation, transfer). The processing mode also depends on the type of source. The various processing modes are:

1. Serial extraction, immediate parallel processing (asynchronous processing): this option is the one most used for background processing, for example in process chains. It processes the data packages in parallel.
2. Serial in dialog process (for debugging) (synchronous processing): this option is used to execute the DTP in a dialog process, primarily for debugging.
3. No data transfer; delta status in source: fetched.

Temporary Data Storage Options in DTP: in a DTP you can choose to store the data temporarily at any step of the load process, for example before extraction or before the transformations. This helps in analyzing the data of failed requests. The temporary store settings are shown below.
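The two delta parameters above can be sketched as follows. This is a hypothetical illustration, not SAP code; the function names and the list-based "requests" are invented for the sketch.

```python
# Hypothetical sketch (not SAP code) of the two DTP delta settings.

def build_target_requests(new_source_requests, request_by_request):
    """'Get all new data request by request': one target request per
    source request; otherwise all new data lands in a single request."""
    if request_by_request:
        return [[req] for req in new_source_requests]
    return [new_source_requests] if new_source_requests else []

def load_with_delta_once(target, request, only_get_delta_once):
    """'Only get delta once': the new request replaces the overlapping
    older request, so the target keeps only the most recent data."""
    if only_get_delta_once:
        target.clear()            # delete the overlapping request(s)
    target.append(request)
    return target

print(build_target_requests(["r1", "r2", "r3"], request_by_request=True))
# → [['r1'], ['r2'], ['r3']]
print(build_target_requests(["r1", "r2", "r3"], request_by_request=False))
# → [['r1', 'r2', 'r3']]

target = []
load_with_delta_once(target, "monday_snapshot", only_get_delta_once=True)
load_with_delta_once(target, "tuesday_snapshot", only_get_delta_once=True)
print(target)  # → ['tuesday_snapshot']
```

The second function mirrors why the "Delete overlapping requests" process type is needed in the process chain: each new snapshot replaces the previous one rather than accumulating.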
Error Handling using DTP: the error-handling options are:

Deactivated: the error stack is not enabled at all, so for any failed records no data is written to the error stack. Thus if the data load fails, all the data needs to be reloaded.

No Update, No Reporting: if there is an erroneous/incorrect record and this option is enabled in the DTP, the load stops there with no data written to the error stack. The request is also not available for reporting. Correction would mean reloading the entire data set.

Valid Records Update, No Reporting (Request Red): all correct data is loaded to the cubes and the incorrect data to the error stack. The erroneous records can be updated using the error DTP. The data will not be available for reporting until the erroneous records are updated and the QM status is manually set to green.

Valid Records Update, Reporting Possible (Request Green): all correct data is loaded to the cubes and the incorrect data to the error stack. The data will be available for reporting and process chains continue with the next steps. The erroneous records can be updated using the error DTP.

How to Handle Error Records in the Error Stack: the error stack is a request-based table (a PSA table) into which erroneous data records from a data transfer process are written. The error stack is based on the DataSource; that is, records from the source are written to the error stack. At runtime, erroneous data records are written to the error stack if error handling is activated for the data transfer process. You use the error stack to update the data to the target destination once the errors are resolved.

The example below shows error-data handling using an error DTP for records with invalid characteristic values. Here, the DTP failed due to invalid characteristic values in some records. Double-click the error stack to see the error records, and modify each error record in the error stack by clicking the Edit button.
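The behavior of these options can be sketched with a small model. This is a hypothetical illustration, not SAP code: `is_valid` is an invented check standing in for the characteristic-value validation, and the lists stand in for the target and the error-stack PSA table.

```python
# Hypothetical sketch (not SAP code): how a DTP with "Valid Records
# Update" splits a request between the target and the error stack.

def is_valid(record):
    # Invented check mirroring the scenario above: characteristic
    # values must be uppercase to be valid.
    return record["material"].isupper()

def run_dtp(records, error_handling_on):
    target, error_stack = [], []
    for rec in records:
        if is_valid(rec):
            target.append(rec)
        elif error_handling_on:
            error_stack.append(rec)   # kept for the error DTP to reload later
        else:
            # "Deactivated" / "No Update": the load stops, nothing is kept,
            # and everything must be reloaded.
            raise RuntimeError("load failed; reload everything")
    return target, error_stack

records = [{"material": "MAT1"}, {"material": "mat2"}, {"material": "MAT3"}]
target, stack = run_dtp(records, error_handling_on=True)
print(len(target), len(stack))  # → 2 1

# After correcting the record in the error stack, an error DTP moves it on:
stack[0]["material"] = stack[0]["material"].upper()
target += [r for r in stack if is_valid(r)]
print(len(target))  # → 3
```

The last two lines correspond to editing the record in the error stack and then running the error DTP to bring it into the target.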
Create and execute an error DTP to load the modified records from the error stack: click "Create Error DTP" on the existing DTP of the DataSource. This DTP load creates a new load request in the target and loads the modified records into it; here, the three modified records can be seen loaded into the target.

Importance of Semantic Groups: the key fields defined in the semantic group act as the key fields of the data packages while reading data from the source system and from the error stack. If all records with the same keys from the source system need to go into the same data package, select those fields as semantic keys in the DTP. In the semantic group, the key fields are only available if you have selected the error-handling option "Valid Records Update, No Reporting (Request Red)" or "Valid Records Update, Reporting Possible (Request Green)".

DTP Settings to Increase Loading Performance
1. Number of parallel processes: we can define the number of processes to be used by the DTP. Here 3 is defined, hence 3 data packages are processed in parallel.
2. Don't load a large data volume in a single DTP request: to avoid loading a large volume of data in one request, select "Get all new data request by request" on the Extraction tab.
3. Full load to target: a full load into a data target from a DSO (or the first load from a DSO to a target) always reads from the active table, since it contains fewer records than the change log table.
4. Load from an InfoCube to another target: when reading data from an InfoCube into an open hub destination, it is best to use "Extraction from Aggregates". If you select this option, the DTP first reads the aggregate tables instead of the E and F fact tables, provided the cube contains aggregates.
5. Handle duplicate records: if you select this option, it will overwrite the master data record if the data is time-independent, and it will create multiple entries if the master data is time-dependent.
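The effect of a semantic group on data-package building can be sketched as follows. This is a hypothetical illustration, not SAP code: `build_packages` and the tiny `package_size` are invented for the sketch, but the point matches the text above — all records sharing the semantic key stay in the same package.

```python
# Hypothetical sketch (not SAP code): semantic groups make the DTP put
# all records with the same key fields into the same data package.

from itertools import groupby

def build_packages(records, semantic_keys, package_size=2):
    keyfn = lambda r: tuple(r[k] for k in semantic_keys)
    records = sorted(records, key=keyfn)
    packages, current = [], []
    for _, group in groupby(records, key=keyfn):
        group = list(group)
        # A whole key group always stays together, even if the package
        # then exceeds the nominal package size.
        if current and len(current) + len(group) > package_size:
            packages.append(current)
            current = []
        current.extend(group)
    if current:
        packages.append(current)
    return packages

records = [
    {"doc": "D1", "item": 10},
    {"doc": "D2", "item": 10},
    {"doc": "D1", "item": 20},
]
for pkg in build_packages(records, semantic_keys=["doc"]):
    print([(r["doc"], r["item"]) for r in pkg])
# → [('D1', 10), ('D1', 20)]
# → [('D2', 10)]
```

Both records for document D1 land in one package even though they were not adjacent in the source; without the semantic key they could end up in different packages.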
When loading to master data, duplicates can be handled by selecting the "Handle duplicate record keys" option in the DTP. When loading to a DSO, duplicate records can be eliminated by selecting the DSO option "Unique Data Records".

How to Reconstruct a Load from a DSO
This guide refers to reconstructing a load in the Opportunities Header DSO, but the process is similar for any other object and can serve as a reference for any DSO where we need to fix data without taking new data from the source system (CRM or R/3).
1. Delete the existing load from the DSO.
2. Go to the Reconstruction tab, select the request we want to reconstruct, and click the Reconstruction/Insert button.
3. A new window with the list of available requests in the DSO will be shown; most of the time the right one is in the first row of the list. Select it and click the Start button.
4. A new window appears asking on which specific server we want to run the process; accept what is proposed by default and click the green button to allow the process to continue.
5. Come back to the Request tab and monitor the progress of the load by clicking the Refresh button.
6. Although the load completes, the request will be inactive (there is no value in the "ID of Request" column).
7. To activate it, choose the request in the Manage screen and click the Activate button; continue monitoring until the request reaches green status.
8. Once the activation is done, the request in the Manage screen shows with all the information populated and the data available.

How to Check the Settings of a Flat File in BW
To check the settings of your flat file — for example, what format of flat file can be uploaded on the BW side — go to SE11 and open RSADMINCV1. This is a maintenance view with the text "BW: Settings for Flat Files". Simply execute it and you will get the settings.

Delta Update Methods in Logistics
We have three delta update methods in Logistics.
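The "Handle duplicate record keys" behavior for time-independent master data can be sketched in a few lines. This is a hypothetical illustration, not SAP code: `dedupe_last_wins` is an invented name for the last-record-wins overwrite described above.

```python
# Hypothetical sketch (not SAP code): "Handle duplicate record keys"
# for time-independent master data -- the last record per key wins.

def dedupe_last_wins(records, key):
    latest = {}
    for rec in records:            # later records overwrite earlier ones
        latest[rec[key]] = rec
    return list(latest.values())

records = [
    {"material": "MAT1", "group": "A"},
    {"material": "MAT2", "group": "B"},
    {"material": "MAT1", "group": "C"},   # duplicate key: overwrites group A
]
print(dedupe_last_wins(records, key="material"))
# → [{'material': 'MAT1', 'group': 'C'}, {'material': 'MAT2', 'group': 'B'}]
```

For time-dependent master data the situation differs, as noted above: the duplicates become multiple entries (one per validity period) instead of overwriting each other.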
Direct Delta: with this update mode, the extraction data is transferred directly into the BW delta queue with each document posting. In doing so, each document posting with delta extraction is posted for exactly one LUW in the respective BW delta queues.

Queued Delta: with this update mode, the extraction data for the affected application is collected in an extraction queue instead of being written to the update tables, and can be transferred as usual with the V3 update by means of an updating collective run into the BW delta queue. In doing so, up to 10000 delta extractions of documents per LUW are compressed into the BW delta queue for each DataSource, depending on the application.

Un-serialized V3 Update: with this update mode, the extraction data for the application considered is written as before into the update tables with the help of a V3 update module. It is kept there until the data is selected and processed by an updating collective run, which transfers it to the BW delta queue. However, in contrast to the current default setting (serialized V3 update), the data in the updating collective run is read from the update tables without regard to sequence and then transferred to the BW delta queue.
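The contrast between direct and queued delta can be sketched with a toy queue model. This is a hypothetical illustration, not SAP code: the class and method names are invented, and the 10000-document LUW size is the figure quoted above.

```python
# Hypothetical sketch (not SAP code) contrasting direct delta and
# queued delta from the logistics extraction described above.

class DeltaQueues:
    def __init__(self):
        self.delta_queue = []        # LUWs that BW eventually extracts
        self.extraction_queue = []   # staging area used by queued delta

    # Direct delta: each document posting goes straight to the delta
    # queue as exactly one LUW.
    def post_direct(self, doc):
        self.delta_queue.append([doc])

    # Queued delta: postings collect in the extraction queue first.
    def post_queued(self, doc):
        self.extraction_queue.append(doc)

    # The updating collective run compresses up to `luw_size` document
    # extractions into one LUW in the BW delta queue.
    def collective_run_queued(self, luw_size=10000):
        while self.extraction_queue:
            self.delta_queue.append(self.extraction_queue[:luw_size])
            self.extraction_queue = self.extraction_queue[luw_size:]

queues = DeltaQueues()
queues.post_direct("doc1")
queues.post_direct("doc2")
print(len(queues.delta_queue))        # → 2 (one LUW per posting)

queues = DeltaQueues()
for n in range(3):
    queues.post_queued(f"doc{n}")
queues.collective_run_queued()
print(len(queues.delta_queue))        # → 1 (three postings, one LUW)
```

The un-serialized V3 variant is similar to queued delta in that a collective run moves the data, except that it stages the data in update tables and the collective run reads them without regard to sequence.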