Performance tuning with Re-cache in Lookup transformation

Suppose you have a requirement to look up the same table in more than one mapping, and the corresponding sessions run in the same workflow.
To improve the performance of that workflow, you can check the Re-cache option in the mapping where the lookup executes first. When Re-cache is checked, the lookup builds its cache in the first mapping that uses it, and the dependent mappings reuse the cache created by that first mapping.
With this reuse of the lookup cache, performance improves. To achieve this, we should set dependencies between the sessions in the workflow and check Re-cache in the first session only.
I will explain this with an example.
I have the below mappings:
1) m_PRESTGtoXREF_mbr_xref
2) m_PRESTGtoXREF_grp_contc_xref
3) m_PRESTGtoRARI_sbgrp_contc
And I have a lookup on the table HCR3R_GRP_XREF, which I am using in more than one mapping. In my workflow there is a dependency: the m_PRESTGtoXREF_mbr_xref mapping should be executed first, followed by m_PRESTGtoXREF_grp_contc_xref and m_PRESTGtoRARI_sbgrp_contc.
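If you prefer to start such a workflow from the command line rather than the Workflow Manager, you can use the pmcmd utility. This is only a sketch; the Integration Service, domain, folder, credentials, and workflow name below are placeholders, not values from this example:

pmcmd startworkflow -sv IS_DEV -d Domain_DEV -u your_user -p your_password -f PRESTG_FOLDER -wait wf_PRESTG_to_XREF

The session dependencies themselves are still defined by the links between sessions inside the workflow.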

In the m_PRESTGtoXREF_mbr_xref mapping, I set the Re-cache option as shown in the snapshot below.


And I am using the same lookup in the remaining two mappings as well. You should check the Re-cache option in the first mapping only; do not check it in the remaining mappings.
If you run the workflow, you can see the below in the session logs of the last two mappings.

Warning: Cache file was created by mapping [m_PRESTGtoXREF_mbr_xref] but is being reused by mapping [m_PRESTGtoXREF_grp_contc_xref]

If you see the above content in the log, it is confirmed that the Re-cache option is working properly. Suppose you do not see the above content and instead see

Lookup Transformation [r_lkp_HCR3R_GRP_XREF]: Default sql to create lookup cache: SELECT GRP_SK,GRP_ID,SS_CD FROM HCR3R_GRP_XREF ORDER BY GRP_ID,SS_CD

then the Re-cache option is not working properly, and you should debug it in the mapping.
Note: The log file content will vary based on your table names. Here, I pasted my session log.
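As a quick sanity check, you can also run the default lookup query from the session log directly against the database. This is simply the query Informatica logged above, reformatted; it shows exactly the rows and ordering the lookup cache is built from:

SELECT GRP_SK, GRP_ID, SS_CD
FROM HCR3R_GRP_XREF
ORDER BY GRP_ID, SS_CD;

If this query runs noticeably long or returns a very large row count, building the cache once and reusing it across the dependent sessions is exactly where the performance gain comes from.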
