
Informatica Performance Tuning Tips

It's been some time since the last post, but now I am back with a pretty interesting article.

Performance tuning is a vital aspect of any project. A faster, better-performing interface is always appreciated and wins laurels.

When the source file or table is huge and the throughput is very low, we can go for partitioning. Partitioning splits the data across multiple threads so the data is processed in parallel and finishes faster.

If we are using the Standard Edition of Informatica, the partitioning feature is not available, so what we can do instead is increase the DTM buffer size. It is present in the session properties under the Performance section. By default it is set to Auto; we can change it to, say, 220 MB or 1 GB depending on the size of the source data. This will increase the throughput.

If a Lookup transformation is taking a long time, we can use a Joiner transformation to perform the same operation. A Joiner generally performs better than a Lookup because the Lookup transformation stops the flow until it builds its cache on the lookup table, whereas the Joiner does not; the flow is not held up and keeps processing record by record.

If the target table is taking time to load, drop its indexes in the pre-session command and recreate them in the post-session command. Keep the target load type as Bulk. This increases the throughput.
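
As a minimal sketch (the index, table, and column names here are only placeholders, and the exact DROP/CREATE syntax varies by database), the pre-session and post-session SQL could look like this:

-- Pre-session SQL: drop the index before the bulk load
DROP INDEX idx_sales_customer_id;

-- Post-session SQL: recreate the index once the load has completed
CREATE INDEX idx_sales_customer_id ON sales_fact (customer_id);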

If the Update Strategy transformation is taking a long time to execute, try performing the same operation with a SQL UPDATE statement instead, either through a SQL transformation or as post-SQL.
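
For example, assuming a hypothetical target table and a staging table holding the changed rows, an Oracle-style post-SQL update could look like the following:

-- Hypothetical post-SQL: apply the updates set-based instead of row by row
UPDATE target_customer t
SET    t.customer_name = (SELECT s.customer_name
                          FROM   stg_customer s
                          WHERE  s.customer_id = t.customer_id)
WHERE  EXISTS (SELECT 1
               FROM   stg_customer s
               WHERE  s.customer_id = t.customer_id);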

Do as much filtering of records as possible at the initial stage of the flow, i.e. at the Source Qualifier. For relational sources, put all the WHERE conditions that filter the data into the SQL override.
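
A small illustration (table and column names are assumptions) of a Source Qualifier SQL override that pushes the filter down to the database:

-- Hypothetical SQL override: filter unwanted rows at the source itself
SELECT order_id, customer_id, order_amt, order_date
FROM   orders
WHERE  order_status = 'ACTIVE'
AND    order_date >= DATE '2014-01-01';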

Use the Sorted Input option on the Aggregator, and sort the data on the group-by ports before it passes through the Aggregator transformation.
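
As a sketch (names assumed), if the Aggregator groups by customer_id, the SQL override can sort on that same port so the Sorted Input option can be enabled safely:

-- Hypothetical override: sort on the Aggregator's group-by port
SELECT customer_id, order_amt
FROM   orders
ORDER  BY customer_id;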

Try to perform as many of the operations as possible at the database level rather than in Informatica.
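
For instance, a join that would otherwise need a Joiner transformation can often be pushed into the Source Qualifier override; the tables and columns below are only assumptions:

-- Hypothetical override: join the source tables in the database instead of in the mapping
SELECT o.order_id, o.order_amt, c.customer_name
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id;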

These are some tips for better-performing flows. Hope this article helps with your Informatica performance tuning.

