Informatica Interview Questions




Question #1 What is a control task?
A control task is used to alter the normal processing of a workflow by stopping, aborting, or failing a workflow or worklet.

Question #2 What is a pipeline partition, and how does it provide a session with higher performance?
Within a mapping, a session can break apart each source-qualifier-to-target pipeline into its own reader/transformation/writer threads. This allows the Integration Service to run each partition in parallel with the other pipeline partitions in the same mapping. The parallelism creates a higher performing session.

Question #3 What is the maximum number of partitions that can be defined in a single pipeline?
You can define up to 64 partitions at any partition point in a pipeline.

Question #4 Pipeline partitioning is designed to increase performance; what is one of its disadvantages?
Increasing the number of partitions increases the load on the node. If the node does not have enough CPU bandwidth, you can overload the system.

Question #5 What is a dynamic session partition?
A dynamic session partition is one where the Integration Service scales the number of session partitions at runtime. The number of partitions is based on several factors, including the number of nodes in a grid or the number of source database partitions.

Question #6 List three dynamic partitioning configurations that cause a session to run with one partition.
1. You set dynamic partitioning to the number of nodes in the grid, and the session does not run on a grid.
2. You create a user-defined SQL statement or a user-defined source filter.
3. You use dynamic partitioning with an Application Source Qualifier.

Question #7 What is pushdown optimization?
Pushdown optimization is a feature within Informatica PowerCenter that allows us to push the transformation logic in a mapping into SQL queries that are executed by the database. If not all of the mapping transformation logic can be translated into SQL, the Integration Service processes what is left.

Question #8 List the different types of pushdown optimization that can be configured.
1. Source-side pushdown optimization – The Integration Service pushes as much transformation logic as possible to the source database.
2. Target-side pushdown optimization – The Integration Service pushes as much transformation logic as possible to the target database.
3. Full pushdown optimization – The Integration Service attempts to push all transformation logic to the target database. If it cannot push all transformation logic to the database, it performs both source-side and target-side pushdown optimization.
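To make pushdown concrete, here is a sketch of the kind of SQL the Integration Service might generate for source-side pushdown; the table and column names are hypothetical, and the exact SQL depends on the mapping and the database. Suppose a mapping reads orders, an Expression transformation uppercases the customer name, and a Filter transformation keeps orders over 100. Pushed to the source database, that logic could collapse into one query:

SELECT
    order_id,
    UPPER(cust_name) AS cust_name, -- Expression logic pushed into the SELECT list
    amount
FROM src_orders                    -- hypothetical source table
WHERE amount > 100                 -- Filter logic pushed into the WHERE clause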
Question #9 For what databases can pushdown optimization be configured?
IBM DB2, Microsoft SQL Server, Netezza, Oracle, Sybase ASE, Teradata, and databases that use ODBC drivers.

Question #10 List several transformations that work with pushdown optimization to push logic to the database.
Aggregator, Expression, Filter, Joiner, Lookup, Router, Sequence Generator, Sorter, Source Qualifier, Target, Union, and Update Strategy.

Question #11 What is real-time processing?
Data sources such as JMS, WebSphere MQ, TIBCO, webMethods, MSMQ, SAP, and web services can publish data in real-time. These real-time sources can be leveraged by Informatica PowerCenter to process data on-demand, and a session can be specifically configured for real-time processing.

Question #12 What types of real-time data can be processed with Informatica PowerCenter?
1. Messages and message queues – Examples include WebSphere MQ, JMS, MSMQ, SAP, TIBCO, and webMethods sources.
2. Web service messages – An example is receiving a message from a web service client through the Web Services Hub.
3. Change data from PowerExchange change data capture sources.

Question #13 What is a real-time processing terminating condition?
A real-time processing terminating condition determines when the Integration Service stops reading messages from a real-time source and ends the session.

Question #14 List three real-time processing terminating conditions.
1. Idle time – The time the Integration Service waits to receive messages before it stops reading from the source.
2. Message count – The number of messages the Integration Service reads from a real-time source before it stops reading from the source.
3. Reader time limit – The amount of time in seconds that the Integration Service reads source messages from the real-time source before it stops reading from the source.

Question #15 What is real-time processing message recovery?
Real-time processing message recovery allows the Integration Service to recover unprocessed messages from a failed session. Recovery files, tables, queues, or topics are used to recover the source messages or IDs, and recovery mode can then be used to recover these unprocessed messages.

Question #16 What factors play a part in determining a commit point?
1. The commit interval
2. The commit interval type
3. The size of the buffer blocks

Question #17 List all configurable commit types.
1. Target-based commit – Data is committed based on the number of target rows and the key constraints on the target table.
2. Source-based commit – Data is committed based on the number of source rows.
3. User-defined commit – Data is committed based on transactions defined in the mapping properties.

Question #18 What performance concerns should you be aware of when logging error rows?
Session performance may decrease when logging row errors because the Integration Service processes one row at a time instead of a block of rows at once.

Question #19 What functionality is provided by the Integration Service when error logging is enabled?
Error logging builds a cumulative set of error records in an error log file or error table created by the Integration Service.

Question #20 What is the difference between stopping and aborting a workflow session task?
A stop command tells the Integration Service to stop reading session data, but it will continue writing and committing data to targets. An abort command works exactly like the stop command, except that it tells the Integration Service to stop processing and committing data to targets after 60 seconds. If all processes are not complete after this timeout period, the session is terminated.

Question #21 What is a concurrent workflow?
A concurrent workflow is a workflow that can run as multiple instances concurrently. Concurrent workflows can be configured in one of two ways:
1. Allow concurrent workflows with the same instance name.
2. Configure unique workflow instances to run concurrently.

Question #22 What is Informatica PowerCenter grid processing, and what are its benefits?
Grid processing is a feature of PowerCenter that enables workflows and sessions to be run across multiple domain nodes. PowerCenter grid's parallel processing provides increased performance and scalability.

Question #23 List the types of parameters and variables that can be defined within a parameter file.
Service variables, service process variables, workflow and worklet variables, session parameters, and mapping parameters and variables.

Question #24 With PowerCenter, in what two locations can one specify a parameter file?
1. Within the session task.
2. Within the workflow.

Question #25 How are mapplet parameters and variables defined within a parameter file different?
Mapplet parameters and variables must be preceded by the name of the mapplet they were defined within. For example, a parameter named MyParameter, defined within mapplet MyMapplet, would be set to a value of 10 in a related parameter file by using the syntax: MyMapplet.MyParameter=10.

Question #26 What is an SQL Transformation in Informatica?
The SQL transformation can be active or passive, and it is connected. It allows for runtime SQL processing: data can be retrieved, inserted, updated, and deleted midstream in a mapping pipeline. SQL transformations have two modes, script mode and query mode. Script mode allows externally located script files to be called to execute SQL, while query mode allows SQL placed within the transformation's editor to execute SQL logic.
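As a hedged sketch of query mode, a statement like the one below could be entered in the transformation's SQL editor; in query mode an input port is referenced by wrapping its name in question marks, and the result columns feed output ports of the same names. The table and port names here are hypothetical:

SELECT order_id, order_status  -- result columns feed matching output ports
FROM orders
WHERE cust_id = ?cust_id_in?   -- ?port? binds the value of input port cust_id_in for each row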
Question #27 What is dynamic lookup cache?
Dynamic lookup cache is cache that is built from the first lookup request. Each subsequent row that passes through the lookup will query the cache. As these rows are processed or inserted into the lookup's target table, the lookup cache is also updated dynamically.

Question #28 What is an Unstructured Data transformation?
The Unstructured Data transformation can be active or passive, and it is connected. It leverages the Data Transformation application to transform unstructured, semi-structured, and structured file formats such as messaging formats, HTML pages, PDF documents, and standards such as ACORD, EDIFACT, EDI-X12, HIPAA, HL7, and SWIFT. Once data has been transformed by Data Transformation, it can be returned to the mapping pipeline and further transformed and/or loaded to an appropriate target.

Question #1 What is an Expression Transformation in Informatica?
An expression transformation in Informatica is a common PowerCenter mapping transformation. It is used to transform data passed through it one record at a time. Within an expression, data can be manipulated, variables created, and output ports generated. We can write conditional statements within output ports or variables to help transform data according to our business requirements. The expression transformation is passive and connected. Check out my Expression Transformation in Informatica post for more in depth Informatica interview question knowledge.

Question #2 What is a Sorter Transformation in Informatica?
The sorter transformation in Informatica helps us sort collections of data by port or ports. This functionality is very much like an ORDER BY SQL statement where we specify certain field(s) we want to ORDER BY. The sorter transformation also contains some additional functionality that is very useful. For example, we can check the "Distinct Output Rows" property to pass only distinct rows through a pipeline. We can also check the "Case Sensitive" property to sort data with case sensitivity in mind. The sorter transformation is active and connected. Check out my Sorter Transformation in Informatica post for more in depth Informatica interview question knowledge.

Question #3 What is a Decode in Informatica?
A decode in Informatica is a function used within an Expression Transformation. It functions very much like a CASE statement in SQL. The decode allows us to search for specific values in a record, and then set a corresponding port value based on the search results. Check out my Decode in Informatica post for more in depth Informatica interview question knowledge.
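A small sketch of the analogy in SQL terms, with made-up column names and status codes (Oracle's SQL DECODE shares the shape of Informatica's function): the DECODE call searches for a value and returns the matching result, exactly as the CASE statement below it does:

-- Decode-style search: compare status_cd against each search value in turn
SELECT DECODE(status_cd, 'A', 'Active',
                         'I', 'Inactive',
                              'Unknown') AS status_desc
FROM customers

-- Equivalent CASE statement
SELECT CASE status_cd
         WHEN 'A' THEN 'Active'
         WHEN 'I' THEN 'Inactive'
         ELSE 'Unknown'
       END AS status_desc
FROM customers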
Question #4 What is a Master Outer Join in Informatica?
A master outer join in Informatica is a specific join type setting within a joiner transformation. The joiner transformation allows us to join two separate pipelines of data by specifying key port(s) from each pipeline, and we define each pipeline as either the master or the detail. There are other join types, but let's focus on the master outer join. The master outer join returns all rows from the detail pipeline and the matched rows from the master pipeline. It is very much like a LEFT OUTER JOIN in SQL, considering the detail pipeline as the LEFT side. Check out my Master Outer Join in Informatica post for more in depth Informatica interview question knowledge.
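In SQL terms, with hypothetical detail and master tables joined on a shared key, the master outer join corresponds to:

-- All rows from the detail side, matching rows from the master side
-- (master columns come back NULL where no match exists)
SELECT d.*, m.*
FROM detail d
LEFT OUTER JOIN master m
  ON d.key_col = m.key_col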
Question #5 What is a Mapplet in Informatica?
The Mapplet in Informatica is essential to creating reusable mappings. Reuse is essential to building efficient business intelligence systems quickly. Within Informatica PowerCenter is a mapplet designer where we can copy existing mapping logic or create new logic we want to reuse in multiple mappings; that is essentially what a mapplet is for. We can create input and output transformations within a mapplet that define the input and output ports for our mapplet within a mapping. Check out my Mapplet in Informatica post for more in depth Informatica interview question knowledge.

Question #6 What is an Update Strategy Transformation in Informatica?
The Update Strategy Transformation in Informatica helps us tag each record passed through it for insert, update, delete, or reject. This information lets the Integration Service know how to treat each record passed to a target. We can set the tag with any type of conditional logic within the update strategy expression property. This functionality is essential to controlling how data is stored within our business intelligence databases. Check out my Update Strategy Transformation in Informatica post for more in depth Informatica interview question knowledge.

Question #7 What is a Router Transformation in Informatica?
The Router Transformation in Informatica allows us to split a single pipeline of data into multiple pipelines. We do this by using the transformation's group tab, where each group gets a group filter condition. We can create as many groups, and therefore as many new pipelines, as we want. The router checks one record at a time against each group filter condition and routes it down the appropriate path or paths. Keep in mind a single record may be copied to many records if it matches multiple group filter conditions. Many times we use the router to check specific primary key conditions to determine if we want to insert, update, or delete data from a target table. Check out my Router Transformation in Informatica post for more in depth Informatica interview question knowledge.

Question #8 What is a Rank Transformation in Informatica?
The Rank Transformation in Informatica lets us sort and rank the top or bottom set of records based on a specific port. You can only have one rank port. Once we have selected which port to rank on, we must set a Top/Bottom attribute value and a Number of Ranks attribute value. Top/Bottom sets a descending (Top) or ascending (Bottom) rank order, and the Number of Ranks attribute determines how many records to return. If we set it to 10, we will return 10 records. Check out my Rank Transformation in Informatica post for more in depth Informatica interview question knowledge.
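As a rough SQL analogy for a Top rank with a Number of Ranks of 10 (the table and column names are hypothetical), ranking on a sales_amount port looks like:

-- Top/Bottom = Top (descending), Number of Ranks = 10, rank port = sales_amount
SELECT *
FROM (
    SELECT s.*,
           RANK() OVER (ORDER BY sales_amount DESC) AS rnk
    FROM sales s
) ranked
WHERE rnk <= 10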
Question #9 What is a Filter Transformation in Informatica?
A Filter Transformation in Informatica is active and connected. Records passed through it are allowed through only when they evaluate TRUE for the developer-defined filter condition. To help mapping performance, always place the filter condition as early in the mapping data flow as possible. This will reduce the processing needed by downstream mapping transformations. Check out my Filter Transformation in Informatica post for more in depth Informatica interview question knowledge.

Question #10 What is a Sequence Generator Transformation in Informatica?
The Sequence Generator Transformation in Informatica is both passive and connected. Since its primary purpose is to generate integer values from its NEXTVAL and CURRVAL default output ports, it is very helpful in creating new surrogate key values; we would generally leverage the NEXTVAL port to accomplish this. Check out my Sequence Generator Transformation in Informatica post for more in depth Informatica interview questions knowledge.

Question #11 What is a Joiner Transformation in Informatica?
The Joiner Transformation in Informatica is used to join two data pipelines in a mapping. We specify one or more ports from each pipeline as a join condition to relate the pipelines. Within the joiner transformation we can specify different join types (Normal, Master Outer, Detail Outer, Full Outer) similar to our join options in SQL (INNER, LEFT OUTER, RIGHT OUTER, FULL OUTER). This functionality is essential to relating data from multiple sources in a mapping, including flat files, XML files, databases, and more. Check out my Joiner Transformation in Informatica post for more in depth Informatica interview questions knowledge.

Question #12 What are Active and Passive Transformations in Informatica?
Most Informatica transformations are either passive or active. A passive transformation is one where records pass through without records ever being dropped or added. In contrast, an active transformation reduces or adds records to our target record pipeline count. Some transformations, such as the lookup transformation in Informatica, can be either active or passive. Check out my Active and Passive Transformations in Informatica post for more in depth Informatica interview question knowledge.

Question #13 What is an Aggregator Transformation in Informatica?
An Aggregator Transformation in Informatica is very useful for aggregating data in an Informatica PowerCenter mapping. The aggregator behaves similarly to a GROUP BY statement in SQL. With the aggregator transformation we can select a specific port or ports to aggregate by and apply aggregate functions (SUM, MAX, MIN, etc.) to the non-group-by ports. If we do not apply an aggregate function to a non-group-by port, the last record's value will be set for that port. Aggregation is an important part of integrating data, so make sure you understand the aggregator transformation in Informatica. Check out my Aggregator Transformation in Informatica post for more in depth Informatica interview question knowledge.
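A short sketch of the GROUP BY analogy, with hypothetical names. Note one difference: SQL rejects a non-grouped column that has no aggregate function, whereas the aggregator simply passes through the last record's value for such a port:

SELECT region,                   -- the group-by port
       SUM(amount) AS total_amt, -- aggregate functions applied to non-group-by ports
       MAX(amount) AS max_amt
FROM sales
GROUP BY region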
Question #14 What is a Union Transformation in Informatica?
A Union Transformation in Informatica functions just like a UNION ALL statement in SQL. It allows us to incorporate two pipelines of data with the same port names and data types into a single pipeline of data. Keep in mind that the union transformation does not collapse records with the same port values into a single distinct record. In SQL we could write a UNION statement instead of UNION ALL for that, but this functionality does not currently exist in the union transformation in Informatica. Check out my Union Transformation in Informatica post for more in depth Informatica interview question knowledge.
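To illustrate the distinction in SQL (table names hypothetical): the union transformation behaves like the first statement, keeping duplicates; it has no equivalent of the second, which collapses them:

-- What the union transformation does: all rows kept, duplicates included
SELECT cust_id, cust_name FROM customers_east
UNION ALL
SELECT cust_id, cust_name FROM customers_west

-- What it does not do: UNION removes duplicate rows
SELECT cust_id, cust_name FROM customers_east
UNION
SELECT cust_id, cust_name FROM customers_west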
Question #15 What is an Informatica Worklet?
Informatica Worklets allow us to apply re-usability at the Informatica workflow level. The concept is very similar to the mapplet; however, mapplets apply re-usability at the mapping level. With a worklet we can add tasks (session, command, etc.) and logic just as we would in a workflow. The difference with the worklet is that we can then add it to any number of workflows we want. Again, this reuse is very helpful, saving us time by developing once and keeping logic in one place instead of many. Check out my Informatica Worklets post for more in depth Informatica interview question knowledge.

Question #16 Walk through a simple Informatica mapping on how to load a Type 2 slowly changing dimension (SCD).
A typical mapping of this nature is going to start with a source table or set of tables. These tables may reside in an ODS or a set of historical tables in a data warehouse, depending on a company's architecture. In our source qualifier we should use some custom SQL to limit the records to those updated since our last run. The general idea is to then place lookup transformation(s) pointing to our target dimension table to retrieve attributes comparable to our source attributes. After getting values for all our attributes from our dimension table, we can use data concatenation to merge our lookup data with our original source data in an expression transformation. Before populating the dimension table, we need to determine if our new historical or ODS records contain data that differs from our target dimension; this is where we compare our recently updated record attributes to our existing dimension record attributes. A great function to use at this point is MD5. After doing this compare, we can filter out records that have not changed and insert the newly changed records.
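Here is a hedged SQL sketch of the change-detection step described above; the table and column names are hypothetical, MD5 function support varies by database, and in the mapping itself the same compare would typically be done with PowerCenter's MD5() function in an expression transformation followed by a filter:

-- Detect new or changed source records against the current dimension rows
SELECT s.cust_id, s.cust_name, s.address
FROM stg_customer s
LEFT OUTER JOIN dim_customer d
  ON s.cust_id = d.cust_id
 AND d.current_flag = 'Y'
WHERE d.cust_id IS NULL                       -- brand new record: insert
   OR MD5(s.cust_name || '|' || s.address)
   <> MD5(d.cust_name || '|' || d.address)    -- attributes changed: insert new version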
Question #17 What does it mean to transform data?
Within PowerCenter we have many transformations that can modify record counts and change or add attribute values. Altering data in this fashion is what it means to transform data.

Question #18 What is an Informatica mapping?
A mapping exists in the Informatica PowerCenter Mapping Designer. A mapping consists of at least one source and one target. Many times a variety of transformations are used in a mapping between the sources and targets to modify and enhance data according to business needs. So we might say a mapping logically defines the ETL process.

Question #19 What SQL statement is comparable to a Union Transformation, UNION or UNION ALL?
UNION ALL.

Question #20 What is an Informatica PowerCenter Task?
An Informatica PowerCenter task allows us to build workflows and worklets. There are a variety of tasks that allow a developer to call code to be executed in a specific order. A series of session tasks, for example, can be used to initiate mappings to extract, transform, and load data.

Question #21 Describe an Informatica PowerCenter Workflow.
A workflow is developed in the Workflow Manager. It is constructed by connecting tasks in a logical fashion to execute specific sets of code (mappings, scripts, etc.). A final workflow can then be started, which will begin to run the tasks within it in the order defined.

Question #22 List each PowerCenter client application and its basic purpose.
Repository Manager – manages repository connections, folders, objects, users, and groups.
Administration Console – performs domain and repository service tasks such as create/configure, upgrade/delete, and backup/restore of nodes and repository services.
Designer – creates ETL mappings.
Workflow Manager – creates and starts workflows and tasks.
Workflow Monitor – monitors and controls (starts/stops) workflows. Access to runtime metrics and session logs is also available.

Question #23 Describe the purpose of a variable port within an expression transformation.
A variable port is designated by checking the V checkbox next to the port. A variable is set and persists across the entire data set passed through the expression transformation. This useful feature can be used in conjunction with conditional logic to assist with applying business logic of some kind. A major way I have personally used variable ports is to generate a new surrogate key as a new insert record is found.

Question #24 What is the purpose of the INITCAP function?
The INITCAP function capitalizes the first letter in each word of a string and converts all other letters to lowercase. EX: INITCAP(IN_DATA)
IN_DATA: informatica interview questions
RETURN VALUE: Informatica Interview Questions

Question #25 When thinking performance, should an active transformation be placed at the beginning or end of a mapping?
The beginning of the mapping, since it can reduce the number of records passed to downstream transformations, reducing mapping overhead.

Question #26 List the different join types within a Joiner transformation and describe each one.
Normal Join – keeps only matching rows based on the condition.
Master Outer Join – keeps all rows from the detail and matching rows from the master.
Detail Outer Join – keeps all rows from the master and matching rows from the detail.
Full Outer Join – keeps all rows from both master and detail.

Question #27 What is a PowerCenter Shortcut and what are its benefits?
A shortcut is a dynamic link to an original Informatica PowerCenter object. If the original object is edited, all shortcuts inherit the changes. Benefits include decreased development time and mapping consistency.

Question #28 What does the Revert to Saved feature do?
If unwanted changes are made to a source, target, transformation, mapplet, or mapping, these objects will be reverted to their previously saved version.

Question #29 What does Auto-link by Name in Designer do?
It adds links between input and output ports across transformations with the same port names (case sensitive).

Question #30 What does the Scale-to-Fit option do in Designer?
This option zooms the designer workspace in or out to allow every object in a mapping to fit within the viewable workspace.

Question #31 Can you copy Informatica objects between mappings in different folders?
No.

Question #32 When would you use a Joiner transformation rather than joining in a Source Qualifier?
When you need to relate data from different databases or perhaps a source flat file.

Question #33 Can you copy a reusable transformation as a non-reusable transformation into a mapping?
Yes, by depressing the Ctrl key while dragging.

Question #34 What is the recommended order for optimizing Informatica PowerCenter performance tuning bottlenecks?
1. Target
2. Source
3. Mapping
4. Transformation
5. Session
6. Grid Deployments
7. PowerCenter Components
8. System

Question #35 Should we strive to create more or fewer reusable transformations?
We should strive to create more reusable transformations. Re-usability is a Velocity best practice.

Question #36 Describe data concatenation.
We can bring together different pieces of the same record with data concatenation. This is only possible when combining branches of the same source pipeline where neither branch contains an active transformation.

Question #37 What are the rules for a self-join in an Informatica mapping?
1. We must place at least 1 transformation between the source qualifier and the joiner in at least 1 branch.
2. Before joining, data must be pre-sorted by the join key.
3. We must configure the joiner to accept sorted input.
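For reference, the SQL being reproduced inside the mapping is simply a table joined to itself (the names below are hypothetical); the three rules above exist because the joiner needs two distinct, sorted branches of the one source pipeline:

-- A classic self-join: employees related to their managers in the same table
SELECT e.emp_id, e.emp_name, m.emp_name AS manager_name
FROM employees e
JOIN employees m
  ON e.manager_id = m.emp_id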
Question #38 What performs better, a single router transformation or multiple filter transformations? Why?
A single router transformation, because a record is read into the transformation once instead of the same row data being read multiple times across the filter transformations.

Question #39 What is an Informatica Task?
An Informatica task allows us to build PowerCenter workflows and worklets. There are a variety of tasks that allow a developer to call code to be executed in a specific order. A series of session tasks, for example, can be used to initiate mappings to extract, transform, and load data.

Question #40 Describe an Informatica Workflow.
A workflow is developed in the Workflow Manager. It is constructed by connecting tasks in a logical fashion to execute specific sets of code (mappings, scripts, etc.). A final workflow can then be started, which will begin to run the tasks within it in the order defined.

Question #41 List the different join types within a Joiner transformation and describe each one.
Normal Join – keeps only matching rows based on the condition.
Master Outer Join – keeps all rows from the detail and matching rows from the master.
Detail Outer Join – keeps all rows from the master and matching rows from the detail.
Full Outer Join – keeps all rows from both master and detail.

Question #42 Can you copy objects between mappings in different PowerCenter folders?
No.

Question #43 Can we make a reusable transformation non-reusable?
No, this process is non-reversible.

Question #44 What does the debugger option Next Instance do?
Runs until it reaches the next transformation or satisfies a breakpoint.

Question #45 What does the debugger option Step to Instance do?
Runs until it reaches a breakpoint or reaches a selected transformation.

Question #46 Can the debugger help you determine why a mapping is invalid?
No. The Designer output window, not the debugger, will show you why a mapping is invalid.

Question #47 What is the Velocity best practice prefix naming standard for a shortcut object and a reusable transformation?
Reusable Transformation – re or RE
Shortcut Object – sc_ or SC_

Question #48 What is persistent lookup cache and what are its advantages?
Persistent lookup cache is stored on the server hard drive and is available to the next session. Since the data is stored on the Informatica server, performance is increased the next time the lookup is called because a database query does not need to occur.

Question #49 What happens to data overflow when not enough memory is specified in the index and data cache properties of a lookup transformation?
It is written to hard disk.

Question #50 What is the rule of thumb when deciding whether or not to cache a lookup?
Cache the lookup if the number of mapping records passing through the lookup is large relative to the lookup table's record count (and size).

Question #51 What does the Find in Workspace feature allow us to do in Mapping Designer?
Perform string searches for the names of objects, tables, columns, or ports in the currently open mapping.

Question #52 What does the View Object Dependencies feature in Designer allow us to do?
Developers can identify objects that may be affected by making changes to a mapping, a mapplet, and its sources/targets.

Question #53 What is the difference between PowerCenter variables and parameters?
Parameters in Informatica are constant values (strings, numbers, and other datatypes). Variables, on the other hand, can be constant or can change values within a single session run.

Informatica interview questions 1
We have a source table containing 3 columns: Col1, Col2, and Col3. There is only 1 row in the table, as follows:

Col1 Col2 Col3
---- ---- ----
a    b    c

There is a target table containing only 1 column, Col. Design a mapping so that the target table contains 3 rows, as follows:

Col
---
a
b
c

Informatica interview questions 2
There is a source table that contains duplicate rows. Design a mapping to load all the unique rows into 1 target while loading all the duplicate rows (only 1 occurrence each) into another target.

Informatica interview questions 3
There is a source table containing 2 columns, Col1 and Col2, with data as follows:

Col1 Col2
---- ----
a    l
b    p
a    m
a    n
b    q
x    y

Design a mapping to load a target table with the following values from the above source:

Col1 Col2
---- -------
a    l, m, n
b    p, q
x    y

Informatica interview questions 4
Design an Informatica mapping to load the first half of the records to 1 target while loading the other half to a separate target.

Informatica interview questions 5
A source table contains emp_name and salary columns. Develop an Informatica mapping to load all records with the 5th highest salary into the target table.

Informatica interview questions 6
What are the validation rules for connecting transformations in Informatica?

Informatica interview questions 6
Let's say I have many records in a source table and I have 3 destination tables A, B, and C. I have to insert records 1 to 10 into A, then 11 to 20 into B, and 21 to 30 into C; then again 31 to 40 into A, 41 to 50 into B, and 51 to 60 into C, and so on up to the last record.

Informatica interview questions 7
The source is a flat file, and we want to load unique and duplicate records separately into two separate targets. How?

Informatica interview questions 8

Input file:
10
10
10
20
20
30

Output file:
1
2
3
1
2
1

Scenario: count the occurrences of each value as it is read. In this case, the first 10 counts as 1, the next 10 as 2, the next 10 as 3; when 20 arrives the count resets to 1 again, and so on.

Informatica interview questions 9

Input file:
10
10
10
20
20
30

Output file:
1
2
3
4
5
6

Informatica interview questions 10

Input file:
10
10
10
20
20
30

Output file:
1
1
1
2
2
3

Informatica interview questions 11
There are 2 input tables:

table aa          table bb
id   name         id   name
---  ------       ---  ------
101  ramesh       106  harish
102  shyam        103  —
103  hari         104  —
104  ram

Output file:
id   name
---  ------
101  ramesh
102  shyam
103  hari
104  ram

Informatica interview questions 12

table aa (input file):
id  name
--  ----
10  aa
10  bb
10  cc
20  aa
20  bb
30  aa

Output:
id  name1  name2  name3
--  -----  -----  -----
10  aa     bb     cc
20  aa     bb     —
30  aa     —      —

Informatica interview questions 14

table aa (input file):
id  name
--  ----
10  a
10  b
10  c
20  d
20  e

Output:
id  name
--  ----
10  abc
20  de

Informatica interview questions 15
In the below scenario, how can I split a row into multiple rows depending on a date range? The source rows are:

ID  Value  from_date(mm/dd)  To_date(mm/dd)
1   $10    1/2               1/3
2   $5     1/5               1/8
3   $20    1/9               1/11

The target should be:

ID  Value  Date
1   $10    1/2
1   $10    1/3
2   $5     1/5
2   $5     1/6
2   $5     1/7
2   $5     1/8
3   $20    1/9
3   $20    1/10
3   $20    1/11

What is the Informatica solution?

Informatica interview questions 16
How can the following be achieved with a single Informatica mapping?
* If the HEADER table has an error value or no value (NULL), then those records and their corresponding child records in the SUBHEADER and DETAIL tables should be rejected from the targets (TARGET1, TARGET2, and TARGET3).
* If the HEADER table record is valid and the SUBHEADER or DETAIL table record also has valid records, only then should the data be loaded into the targets (TARGET1, TARGET2, and TARGET3).
* If the HEADER table record is valid, but the SUBHEADER or DETAIL table record has an error value (NULL), then no data should be loaded into any of the targets (TARGET1, TARGET2, and TARGET3).

HEADER
C1  C2   C3    C4            C5
1   ABC  null  null          C1
2   ECI  756   CENTRAL TUBE  C2
3   GTH  567   PINCDE        C3

SUBHEADER
C1  C2     C3      C4    C5
1   01001  VALUE3  748   543
1   01002  VALUE4  33    22
1   01003  VALUE6  23    11
2   02001  AAP1    334   443
2   02002  AAP2    44    22
3   03001  RADAR2  null  33
3   03002  RADAR3  null  234
3   03003  RADAR4  83    31

DETAIL
C1  C2   C3     C4   C5
1   D01  TXXD2  748  543
1   D02  TXXD3  33   22
1   D03  TXXD4  23   11
2   D01  PXXD2  56   224
2   D02  PXXD3  666  332

TARGET1
2  XYZ  756  CENTRALTUBE  CITY2

TARGET2
2  02001  AAP1  334  443
2  02002  AAP2  44   22

TARGET3
2  D01  PXXD2  56   224
2  D02  PXXD3  666  332

Informatica interview questions 17
If I had a source with unique and duplicate records such as 1, 1, 2, 3, 3, 4, I want to load the unique records (2, 4) into one target, and I want to load the duplicate records (1, 3) into another target.

Informatica interview questions 18
I have 100 records in a relational table and I want to load them into 3 targets: the first record goes to target 1, the second to target 2, the third to target 3, and so on. What transformations are used to do this?

Informatica interview questions 19
There are three columns: empid, salmonth, and sal. The values look like 101, january, 1000; 101, february, 1000; and so on, with twelve rows per employee. The required output contains 13 columns — empid, jan, feb, march, ..., dec — with values such as 101, 1000, 1000, ..., 1000.

Informatica interview questions 20
I have a source as a file or DB table:

E-no  e-name   sal  dept
0101  Max      100  1
0102  steve    200  2
0103  Alex     300  3
0104  Sean     76   1
0105  swaroop  120  2

If I want to run one session 3 times:
First run: it should populate department 1.
Second run: only department 2.
Third run: only department 3.
How can we achieve this?