
Getting Files Created After a Specific Time from a Unix Directory Using the touch and find Commands

Generally, we all know that we can create a file using the "touch" command in UNIX. If the given file does not exist, "touch" will create it; otherwise, it will update the modification time of that file.

Syntax:
touch filename
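
For illustration, here is a quick sketch (sample.dat is just a hypothetical file name):

Example:
touch sample.dat        # creates sample.dat if it does not exist
touch sample.dat        # file already exists, so only its modification time is updated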

Suppose, in our project, we receive source files with the naming convention "SOURCE_FILE_YYYYMMDD.dat". Today we received SOURCE_FILE_20140117.dat, tomorrow we will get SOURCE_FILE_20140118.dat, and so on. Every day we receive one file.

We may have a requirement to pick up, from a UNIX directory, all the files that were created after a specific point in time, based on each file's timestamp.

Step 1: Create a dummy file with a specific timestamp using the "touch" command.

Syntax:
touch -t YYYYMMDDhhmm.ss dummy.dat
We should pass the timestamp in YYYYMMDDhhmm.ss format: four-digit year, two-digit month, day, hour (24-hour clock), and minute, followed by optional seconds.

Example:
1) If you give touch -t 201401240134.55 dummy.dat then dummy.dat will be created and its modification time will be Jan 24, 2014 01:34:55.
2) If you give touch -t 201401070134.55 dummy.dat then dummy.dat will be created and its modification time will be Jan 07, 2014 01:34:55.
We can see the modification time using the ls -ltr command in UNIX.
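
For instance, after the second example above, ls -ltr would show output along these lines (the exact columns vary by system; user and group are placeholders):

-rw-r--r-- 1 user group 0 Jan  7 01:34 dummy.dat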

If we want to take all the files which are created after Jan 7th, 2014 01:34:55, then we can use:
touch -t 201401070134.55 dummy.dat

Step 2: After step 1 is done, we have to find the files that were created after Jan 7th, 2014 01:34:55. We can find them using the "find" command in UNIX.

For the above requirement, we can use the commands below. The -newer test matches files whose modification time is more recent than that of dummy.dat.
If we are in the directory where the source files live, use:
find . -name 'SOURCE_FILE_*.dat' -newer dummy.dat
If we are not in that directory and the source files are in /infa/sourcefiles, use:
find /infa/sourcefiles -name 'SOURCE_FILE_*.dat' -newer dummy.dat
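
Putting both steps together, here is a minimal end-to-end sketch. It assumes the source files sit in /infa/sourcefiles, and /infa/targetdir is a hypothetical directory used only to show how matched files could be copied with -exec:

# Step 1: create the reference file with the cut-off timestamp
touch -t 201401070134.55 dummy.dat
# Step 2: list every source file newer than the reference file
find /infa/sourcefiles -name 'SOURCE_FILE_*.dat' -newer dummy.dat
# Optionally, copy each matched file somewhere for further processing
find /infa/sourcefiles -name 'SOURCE_FILE_*.dat' -newer dummy.dat -exec cp {} /infa/targetdir/ \;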
