
Debugger in Informatica


We usually go for the Debugger in Informatica when we want to check at which level (i.e. which transformation) in a mapping the data is getting wrongly populated or where functionality is missing.

Suppose we are expecting a date field to be populated in YYYYMMDD format, but it is getting populated in DDMMYYYY format. Using the Debugger we can check in which Expression or Source Qualifier transformation the data first goes wrong, as illustrated below.
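As a rough illustration (the port names IN_LOAD_DATE and OUT_LOAD_DATE below are made up for this sketch, not taken from any real mapping), the output port we would expect to find in the Expression transformation is something like

    TO_CHAR(IN_LOAD_DATE, 'YYYYMMDD')    -- expected YYYYMMDD output

whereas a port coded as

    TO_CHAR(IN_LOAD_DATE, 'DDMMYYYY')    -- produces the wrong DDMMYYYY value

would be the culprit. Stepping through the mapping with the Debugger shows at which transformation the value first appears in the wrong format.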

While using the Debugger it's better to restrict the data to fewer records (preferably a single record). We can restrict it at the Source Qualifier level by adding a filter condition, e.g. where emp_id = '1001', so that only the record for 1001 is pulled (see the sketch below).
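For example, assuming a hypothetical EMP source table with an EMP_ID column, the Source Qualifier's source filter or SQL override could look like this:

    SELECT EMP_ID, EMP_NAME, HIRE_DATE
    FROM   EMP
    WHERE  EMP_ID = '1001'   -- debug only this one record

This keeps the debug session fast and makes it easy to follow a single row through each transformation.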

In the Designer, under the Mappings menu, we need to click Debugger and then Start Debugger (F9). We will then get the option to choose the Integration Service, and we will have three options:

a) Use an existing session instance
b) Use an existing reusable session instance
c) Create a debug session instance




We can choose the existing session instance, after which it will ask us to select the session and then which targets are needed. We can select all the targets or just the particular one whose data we want to view.

Then the Debugger gets started. We need to press F10 to move from one transformation to the next; if we want to jump directly to a particular transformation, we need to press Ctrl + F10 after selecting that transformation.

If we have multiple rows, we need to press Ctrl + F10 multiple times to view all the records. For example, if there are 5 records and we want to see all of them, we need to press it 5 times.

The Debugger window will look like the screenshot below.



We can stop the Debugger by going to the Mappings menu again.

Note: once we stop the Debugger, we can see in the Workflow Monitor that the particular session is shown in a failed state.

Hope this article helps you in understanding the Debugger.

