How to load unique records in one file and duplicate records in a second file without losing records?

Here you will learn a scenario where we separate unique and duplicate data from a source file without losing any data. This is a complex mapping and a must-know for all experienced Informatica resources. The solution may look like an easy approach, but it is not; as you go through the lecture, you will see why.

Design a mapping to load all unique products in one table and the duplicate rows in another table.
The first table should contain the following output
A
D

The second target should contain the following output
B
B
B
C
C

#informatica #informaticascenarios #informaticainterview
Comments

Sir, if we use only an Expression transformation, how can we generate this? Please suggest. Second thing: considering this mapping, I have a doubt about why we have used a Joiner here.

akhileshsoni

I think we don't need a Joiner in this scenario; we can just pass the count from the Aggregator to the Router to route the data.
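For context on why the video uses a Joiner: an Aggregator with group-by emits only one row per group, so by the time the count exists, the individual detail rows are gone. A minimal sketch of the Aggregator → Joiner → Router flow, with plain Python dictionaries and lists standing in for the transformations (source rows assumed to be A, B, B, B, C, C, D):

```python
rows = ["A", "B", "B", "B", "C", "C", "D"]

# Aggregator stage: one count per group (detail rows are collapsed here).
agg = {}
for r in rows:
    agg[r] = agg.get(r, 0) + 1
# agg -> {'A': 1, 'B': 3, 'C': 2, 'D': 1}

# Joiner stage: attach the group count back onto every source row,
# restoring the detail rows the Aggregator collapsed.
joined = [(r, agg[r]) for r in rows]

# Router stage: count = 1 -> unique target, count > 1 -> duplicate target.
unique_target = [r for r, c in joined if c == 1]
duplicate_target = [r for r, c in joined if c > 1]
```

Without the join-back step, the duplicate target could only receive one row per group (B once, C once) instead of every duplicate occurrence.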

niharikajain

I don't know why people don't subscribe to your videos... your content is really great, bro 👍👍

kuttybagavathi

My doubt is: I can get the same result if, in the Router, I put count = 1; then we don't need the Expression transformation and we can connect the Sorter and the Aggregator directly. I am getting the same result. Can you please help me understand whether there is any problem if we remove the Expression?

GeetLifeThoughts

If a source has 100 rows and the last 3 rows need to be loaded into a separate target, how can that be done?

Nirmalkumar-zrqe

If we are using a group-by condition on the newvalue column, doesn't it create a single index for each newvalue group? At the end, the Aggregator output should be A 1, B 1, C 3, and not A 1, B 1, C 1, C 2, C 3, shouldn't it?
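The confusion in this comment is between two different counts: a running count produced row-by-row by an Expression on sorted input (one output row per input row), and a group count produced by an Aggregator with group-by (one output row per group). A small sketch contrasting the two, with the source again assumed to be A, B, B, B, C, C, D:

```python
from itertools import groupby

rows = sorted(["A", "B", "B", "B", "C", "C", "D"])

# Expression-style running count on sorted input:
# one output row per input row, counter resets when the value changes.
running = []
prev, n = None, 0
for r in rows:
    n = n + 1 if r == prev else 1
    running.append((r, n))
    prev = r
# running -> [('A',1), ('B',1), ('B',2), ('B',3), ('C',1), ('C',2), ('D',1)]

# Aggregator-style group count: one output row per group.
grouped = [(k, len(list(g))) for k, g in groupby(rows)]
# grouped -> [('A',1), ('B',3), ('C',2), ('D',1)]
```

The Expression sees every row and emits A1, B1, B2, B3, C1, C2, D1; the Aggregator collapses each group to a single row with its total.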

gaurishetye