
Informatica Transformation: Source Qualifier

Source Qualifier is an active transformation.
It is a connected transformation and must be present in the pipeline.
It is the default transformation that comes along with the source instance when you drag and drop a source into the mapping area. XML and COBOL sources are exceptions: they use the XML Source Qualifier and the Normalizer transformation instead.
This transformation is mandatory for reading data from sources, and it converts the source data types to Informatica native data types. So, you should not alter the data types of the ports in the Source Qualifier transformation.

If we are not using Pushdown Optimization, every port of the Source Qualifier should be linked from the source instance; otherwise, the workflow will fail.
If we are using Pushdown Optimization, at least one port of the Source Qualifier should be linked from the source instance; otherwise, the workflow will fail.


Source Qualifier Properties:
Below are the Source Qualifier properties. They are enabled when we are using tables as sources; when files are the source, these properties are disabled.



Double-clicking the Source Qualifier transformation opens the properties window where these options can be set.

SQL Query:
We can pass an override query in this property. The override query can perform joins, and it can filter data by adding a WHERE clause.
If we are providing an override query, we need to take care of the order of columns: the order of columns in the override query should be the same as the order of the ports in the Ports tab of the Source Qualifier.
If we open the SQL Query property, an SQL editor opens where the override query can be entered.
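As an illustration, here is a sketch of an override query joining two hypothetical tables, Employee and Department (the table and column names are assumptions for illustration, not from the original post). Note that the SELECT list follows the same order as the ports in the Source Qualifier:

SELECT Employee.emp_num,
       Employee.emp_name,
       Department.dept_name
FROM Employee, Department
WHERE Employee.dept_id = Department.dept_id
AND Employee.emp_num > 100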



User Defined Join:
If we have two source instances and we are pulling fields from both instances into one Source Qualifier, we need to provide a join condition to get data. There are two ways to provide joins: one is to use the SQL Query property (explained above) and provide an override query; the other is to use the "User Defined Join" property to pass just the join condition.

Let's assume there are two instances, Employee and Department, and we are joining based on the employee number field. In this case, we need to pass the below in this property field.

Employee.emp_num=Department.emp_num

We should not give the ON keyword when passing a user-defined join; if we give the ON keyword, the workflow will fail.
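With this property set, the Integration Service folds the condition into the WHERE clause of the default query it generates, roughly as in the sketch below (the column list is assumed for illustration):

SELECT Employee.emp_num,
       Employee.emp_name,
       Department.dept_name
FROM Employee, Department
WHERE Employee.emp_num = Department.emp_num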


Source Filter:
If we want to filter records while reading from the source, we can pass the condition in this property. Filtering can also be done through the WHERE clause of an override query; this property is useful when we are not giving an override query but still want to filter records.

Let's assume we are using the Employee table and we want to read only records with employee number 100. Then we need to give the below.

Employee.emp_num=100

We should not give the WHERE keyword when passing a source filter in this property. If we give WHERE, the workflow will fail.
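As with the user-defined join, the Integration Service appends this condition to the WHERE clause of the generated query, along the lines of the sketch below (the column list is assumed for illustration):

SELECT Employee.emp_num,
       Employee.emp_name
FROM Employee
WHERE Employee.emp_num = 100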


Select Distinct:
If we select this option, duplicate records are eliminated and only distinct records are read from the source.
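Under the hood, this option simply adds the DISTINCT keyword to the generated SELECT, roughly as below (the column list is assumed for illustration):

SELECT DISTINCT Employee.emp_num,
                Employee.emp_name
FROM Employee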


Pre and Post SQL:

We can use these properties to execute queries that need to run before and after the mapping executes. This is similar to the session-level Pre- and Post-SQL properties, which were covered in an earlier post. We can get more details in the following link:

http://dwbuddy.blogspot.in/2014/01/pre-and-post-sqlsession-properties-at.html
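As a simple illustration, a Pre SQL statement might clear a staging table and a Post SQL statement might stamp an audit row. Both statements below are assumptions for illustration, not from the original post:

Pre SQL:  TRUNCATE TABLE STG_EMPLOYEE
Post SQL: UPDATE AUDIT_LOG SET LOAD_DATE = SYSDATE WHERE JOB_NAME = 'EMP_LOAD'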



Thank you, and if you have any questions, please do let us know.
