Senarios50 (Autosaved)
Uploaded by Pavan Kumar

50 Scenarios

Scenario 1:
INPUT
ID Name Salary dept_no
1 AA 26000 1
2 BB 35000 2
3 CC 28000 3
4 dd 24000 1
5 ee 31000 2
6 ff 45000 3
7 xx 20000 4
8 yy 35000 2
9 zz 32000 4

The output file should contain all fields, with these conditions:
1) If the salary is greater than 30k, print all fields except dept_no, i.e. the dept_no field should be blank.
2) If the salary is less than 30k, print all fields except the salary, i.e. the salary field should be blank.

Example output records:
ID Name Salary dept_no
1 AA        1
2 BB 35000
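The blanking logic above can be checked with a short Python sketch (the function name is illustrative; in Ab Initio this would be a single Reformat):

```python
def blank_by_salary(records, threshold=30000):
    """For each (id, name, salary, dept_no) row, blank dept_no when
    salary > threshold, otherwise blank salary (Scenario 1 rule)."""
    out = []
    for rec_id, name, salary, dept_no in records:
        if salary > threshold:
            out.append((rec_id, name, salary, ""))   # dept_no blank
        else:
            out.append((rec_id, name, "", dept_no))  # salary blank
    return out

rows = [(1, "AA", 26000, 1), (2, "BB", 35000, 2)]
print(blank_by_salary(rows))
# [(1, 'AA', '', 1), (2, 'BB', 35000, '')]
```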
Scenario 2:
Source and target are files. We have to replace the last duplicate value with NULL.

Src:        Tgt:
1 A 10      1 A 10
2 B 20      2 B 20
3 B 20      3 B NULL
4 C 30      4 C 30
5 D 30      5 D 30
6 E 30      6 E NULL
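A minimal Python sketch of the rule (only the last occurrence of a value that appears more than once is nulled; the function name is illustrative):

```python
from collections import Counter

def null_last_duplicate(rows):
    """rows: list of (id, name, value). For every value occurring more
    than once, replace the value in its LAST occurrence with None."""
    counts = Counter(v for _, _, v in rows)
    seen = Counter()
    out = []
    for rid, name, val in rows:
        seen[val] += 1
        if counts[val] > 1 and seen[val] == counts[val]:
            out.append((rid, name, None))  # last duplicate -> NULL
        else:
            out.append((rid, name, val))
    return out
```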
Scenario 3:
src: tgt:
Id sal Id sal total
1 300 1 300 1700
2 400 2 400 1700
3 500 3 500 1700
4 500 4 500 1700

Hint: Rollup with a null key to find the total, then join back with the key as null.
Scenario 4:
Source table:
name No
A 10
B 20
C 30

Output target table:
Name No
A 30
B 20
C 10

Hint: Sort with key No descending, then Fuse with the original flow.
Scenario 5:

• source
• A
• B
• C
• D
• E
• Target
• AOOOO
• OBOOO
• OOCOO
• OOODO
• OOOOE
Scenario 6:
My table has only one column holding number data. I want to fetch only the prime numbers from that column. For example:
Src Col1 -- 1,2,3,4,5,6,7,8,9
Tgt Col1 -- 2,3,5,7
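The prime filter can be sketched in Python for verification (trial division; names are illustrative):

```python
def is_prime(n):
    """Trial division up to sqrt(n); 0 and 1 are not prime."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

src = [1, 2, 3, 4, 5, 6, 7, 8, 9]
tgt = [n for n in src if is_prime(n)]
print(tgt)  # [2, 3, 5, 7]
```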
Scenario 7:
Src.
• 10
• 10
• 10
• 20
• 20
• 30
Trg
• 10 1
• 10 2
• 10 3
• 20 1
• 20 2
• 30 1
Scenario 8:
I/P
a,1
b,2
c,3
O/P
abc,1
abc,2
abc,3

Hint: Rollup with a null key, then join with a null key.
Scenario 9: Separate out unique and duplicate records.
Input File:
ID Name
1 a
2 b
2 b
3 c
4 d
4 d
4 d
5 e
5 e
6 f

Output 1 (unique records):
ID Name Count
1 a 1
3 c 1
6 f 1

Output 2 (duplicate records):
ID Name Count
2 b 2
4 d 3
5 e 2

Hint: Rollup with key as ID; to find the count of each group use the aggregate function count(in.id). Then Filter by Expression with in.count == 1.
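The rollup-then-filter logic can be sketched in Python (function name illustrative):

```python
from collections import Counter

def split_unique_dup(rows):
    """rows: list of (id, name). Returns (uniques, duplicates), each as
    (id, name, count), keeping first-seen order."""
    counts = Counter(rows)
    uniques, dups, seen = [], [], set()
    for row in rows:
        if row in seen:
            continue
        seen.add(row)
        c = counts[row]
        (uniques if c == 1 else dups).append((*row, c))
    return uniques, dups

data = [(1, "a"), (2, "b"), (2, "b"), (3, "c"), (4, "d"), (4, "d"),
        (4, "d"), (5, "e"), (5, "e"), (6, "f")]
u, d = split_unique_dup(data)
# u == [(1,'a',1), (3,'c',1), (6,'f',1)]
# d == [(2,'b',2), (4,'d',3), (5,'e',2)]
```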
Scenario 10: Calculate runs per over.
Input File:
Ball_no Runs
1 1
2 1
3 0
4 0
5 6
6 1
1 0
2 4
3 1
4 0
5 0
6 1

Output File:
Over_no Total_runs
1 (sum of first over)
2 (sum of second over)
Scenario 11:
Input File:
Ball_no Runs
1 1
2 1
3 0
4 0
5 6
6 1
7 0
8 4
9 1
10 0
11 0
12 1

Output File:
Over_no Total_runs
1 (sum of first over)
2 (sum of second over)
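For the continuous ball numbering of Scenario 11, the over grouping can be verified with a short Python sketch (names are illustrative; Scenario 10's restarting ball numbers would instead need a counter that increments whenever ball_no resets):

```python
from collections import defaultdict

def runs_per_over(balls, balls_per_over=6):
    """balls: list of (ball_no, runs) with ball_no running 1..N.
    Over number = (ball_no - 1) // balls_per_over + 1."""
    totals = defaultdict(int)
    for ball_no, runs in balls:
        totals[(ball_no - 1) // balls_per_over + 1] += runs
    return sorted(totals.items())

balls = [(1, 1), (2, 1), (3, 0), (4, 0), (5, 6), (6, 1),
         (7, 0), (8, 4), (9, 1), (10, 0), (11, 0), (12, 1)]
print(runs_per_over(balls))  # [(1, 9), (2, 6)]
```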
Scenario 12:
Input File

ID Name Sal
101 abc 10000
102 xyz 12000
103 pqr 15000
104 lmn 10000
105 jkl 20000
106 wxy 18000

Output File
ID Name Sal Cumulative Sal
101 abc 10000 10000
102 xyz 12000 22000
103 pqr 15000 37000
104 lmn 10000 47000
105 jkl 20000 67000
106 wxy 18000 85000
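The cumulative column behaves like a SCAN with a null key; a quick Python check (function name illustrative):

```python
def with_cumulative(rows):
    """rows: (id, name, sal) -> (id, name, sal, cumulative_sal),
    a running total over the whole file."""
    total, out = 0, []
    for rid, name, sal in rows:
        total += sal
        out.append((rid, name, sal, total))
    return out

emps = [(101, "abc", 10000), (102, "xyz", 12000), (103, "pqr", 15000),
        (104, "lmn", 10000), (105, "jkl", 20000), (106, "wxy", 18000)]
# cumulative column: 10000, 22000, 37000, 47000, 67000, 85000
```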
Scenario 13:
Input File 1
ID
1
2
3
Input File 2
Name
a
b
c
Output:
Record
1,a
2,b
3,c
(Hint : Use one of the transform
components)
Scenario 14:
Finding the nth highest salary from the employee table using the Rollup component. The graph should be generic enough that n can be a user input.
Scenario 15:
Source:

TABLE_NAME VALUE

TABA 1
TABA 2
TABA 3
TABA 4
TABB 7
TABB 8
TABC 1
TABC 2
TABC 5

Output:

TABLE_NAME VAL1 VAL2 VAL3 VAL4


TABA 1 2 3 4
TABB 7 8
TABC 1 2 5
Scenario 16:
input is -

Card No Amount Date


101 1000 1-1-15
101 1500 1-2-15
101 1000 1-3-15
102 1000 2-1-15
102 2000 2-2-15
102 3000 2-3-15

Output should be -
Card No Amount Date
101 1000 1-1-15
101 1500 1-2-15
101 1000 1-3-15
101 3500
102 1000 2-1-15
102 2000 2-2-15
102 3000 2-3-15
102 6000
Scenario 17:
Given the following input value of string(11) --> "AAAA1014CSE",
auto-assign the values to the output DML fields. How can we do that? I
don't want to use string_substring and a type cast for each field.

output dml:

record
string(4) name;
decimal(4) rollnum;
string(3) dept;
end;

output record:
name = AAAA
rollnum=1014
dept=CSE
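In Ab Initio the usual answer is to reinterpret the string against the fixed-width output DML (reinterpret_as) rather than substring each field. The same one-spec idea sketched in Python with struct (function name illustrative):

```python
import struct

def parse_fixed(record: str):
    """Unpack string(11) into name(4) + rollnum(4) + dept(3) with one
    fixed-width format spec instead of per-field substrings."""
    name, rollnum, dept = struct.unpack("4s4s3s", record.encode())
    return {"name": name.decode(),
            "rollnum": int(rollnum),  # decimal(4)
            "dept": dept.decode()}

print(parse_fixed("AAAA1014CSE"))
# {'name': 'AAAA', 'rollnum': 1014, 'dept': 'CSE'}
```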
Scenario 18:
Input File 1
ID
1
2
3
Input File 2
Name
a
b
c
Output:
Record
1,a
2,b
3,c
(Hint : Use one of the transform
components)
Scenario 19:
You have an input file with one record as below:
1,2,3,4,5,6
Expected output is:
1
2
3
4
5
6
Scenario 20:
You have to assign a unique sequence number to each group of records, based on a key (in this example, key = name).

Suppose the input (name, date) is as below:
abc, 01/01/2015
abc, 01/02/2015
def, 01/01/2015
def, 01/02/2015

Then assign seq no. like below:
1,abc, 01/01/2015
1,abc, 01/02/2015
2,def, 01/01/2015
2,def, 01/02/2015
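This is a dense rank over the key; a Python sketch assuming the input is already sorted by the key (function name illustrative):

```python
def assign_group_seq(rows):
    """rows: (name, date) sorted by name. Assign one sequence number
    per distinct name, incrementing only when the key changes."""
    out, seq, prev = [], 0, object()
    for name, date in rows:
        if name != prev:
            seq += 1
            prev = name
        out.append((seq, name, date))
    return out
```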
Scenario 21:
You get a single record that contains a series of two-digit numbers without any delimiter, like:
• 10102020203040404040
This series might go on. The graph should count the occurrences of each unique two-digit number and show it with its count, like:
• 10 : 2
• 20 : 3
• 30 : 1
• 40 : 4
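The chunk-and-count logic in a quick Python sketch (function name illustrative):

```python
from collections import Counter

def count_pairs(series: str):
    """Split the digit string into consecutive two-digit chunks and
    count each unique chunk."""
    pairs = [series[i:i + 2] for i in range(0, len(series), 2)]
    return dict(sorted(Counter(pairs).items()))

print(count_pairs("10102020203040404040"))
# {'10': 2, '20': 3, '30': 1, '40': 4}
```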
Scenario 22:
Input File :
• 101,krishna
• 102,surya
• 103,asha
Output :
• 101,anhsirk
• 102,ayrus
• 103,ahsa
Scenario 23:
Customer Account Details (input):
Date|Balance_Amt
=======================================
20160401|110.00
20160402|120.00
20160406|1120.00
20160410|2000.00
20160411|2100.00
20160420|3200.00
---------------------------------------

Output:
Date|Balance_Amt
=======================================
20160401|110.00
20160402|120.00
20160403|120.00
20160404|120.00
20160405|120.00
20160406|1120.00
20160407|1120.00
20160408|1120.00
20160409|1120.00
20160410|2000.00
20160411|2100.00
20160412|2100.00
20160413|2100.00
20160414|2100.00
20160415|2100.00
20160416|2100.00
20160417|2100.00
20160418|2100.00
20160419|2100.00
20160420|3200.00
---------------------------------------
Do it using not more than one transform component.
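The fill-forward behavior can be sketched in Python for a few rows (function name illustrative):

```python
from datetime import date, timedelta

def fill_daily_balances(rows):
    """rows: list of ('YYYYMMDD', balance) sorted by date. Emit one row
    per calendar day, carrying the last known balance forward until the
    next recorded date."""
    out = []
    for (d1, bal), nxt in zip(rows, rows[1:] + [None]):
        cur = date(int(d1[:4]), int(d1[4:6]), int(d1[6:]))
        end = (date(int(nxt[0][:4]), int(nxt[0][4:6]), int(nxt[0][6:]))
               if nxt else cur + timedelta(days=1))
        while cur < end:
            out.append((cur.strftime("%Y%m%d"), bal))
            cur += timedelta(days=1)
    return out

rows = [("20160401", 110.0), ("20160402", 120.0), ("20160406", 1120.0)]
# fills 20160403-20160405 with 120.0, ends at 20160406|1120.0
```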
Scenario 24:
Input:
Source_City|Destination_City|Distance
=======================================
DELHI|PUNE|1500
BANGALURU|MUMBAI|900
PUNE|DELHI|1500
DELHI|KOLKATA|1200
CHENNAI|MUMBAI|1350
KOLKATA|DELHI|1200
---------------------------------------

Output:
Source_City|Destination_City|Distance
========================================
DELHI|PUNE|1500
BANGALURU|MUMBAI|900
DELHI|KOLKATA|1200
CHENNAI|MUMBAI|1350

Since the route from CityA to CityB is the same as CityB to CityA, remove such duplicate routes. Do it using:
a) only one transform component
b) without using a Sort component
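One single-pass, sort-free approach keyed on the unordered city pair, sketched in Python (function name illustrative):

```python
def dedup_routes(rows):
    """rows: (src, dst, dist). Keep the first record of each unordered
    city pair - one pass with a set, no sort needed."""
    seen, out = set(), []
    for src, dst, dist in rows:
        key = frozenset((src, dst))  # (A,B) and (B,A) collide
        if key not in seen:
            seen.add(key)
            out.append((src, dst, dist))
    return out

routes = [("DELHI", "PUNE", 1500), ("BANGALURU", "MUMBAI", 900),
          ("PUNE", "DELHI", 1500), ("DELHI", "KOLKATA", 1200),
          ("CHENNAI", "MUMBAI", 1350), ("KOLKATA", "DELHI", 1200)]
# keeps DELHI|PUNE, BANGALURU|MUMBAI, DELHI|KOLKATA, CHENNAI|MUMBAI
```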
Scenario 25:
Name|Designation
======================================== Output:
Sagar|Analyst
Raj|Analyst Vowel|Count
Tahir|Analyst
=======================
Dhanashree|Analyst
a|<count of all a/As>
Ravi|Analyst e|<count of all e/Es>
Vignesh|Consultant i|<count of all i/Is>
Nirbhay|Director o|<count of all o/Os>
Trevor|Chief Executive Officer
u|<count of all u/Us>
Neeraj|Director
---------------------------------------
Scenario 27:
INPUT FILE
category name
1 a
3 b
2 c
1 x
4 y
2 n
4 p
1 q
3 z
1 l

Output File
category names
1 axql
2 cn
3 bz
4 yp
Hint: Sort with key = category ascending, then Rollup concatenating names.
Scenario 28:
Input:
abcdefgh   :- row 1 (string)
1234567    :- row 2
O/P:
col1 col2
a 1
b 2
c 3
d 4
e 5
f 6
Scenario 29:
Print this pattern (Ab Initio):
*
***
*****
********
**********
Scenario 30:
i/p:
a1b2c3d4e5f6g7h8
o/p:
a 1
b 2
c 3
d 4
e 5
f 6
g 7
h 8
Scenario 31:
Input file:
1,1
2,2
3,3
4,4
6,6
Output file:
1,6
2,5
3,4
4,3
6,1
using only one component (excluding the input and output components)
Scenario 32:
Input file: the numbers 1 to 25, one per record.
Output file:
1
2
3
4
5
6
7
8
9
10
10
11
12
13
14
15
16
17
18
19
20
20
21
22
Scenario 33:
Input file: Output file:
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
11 55
12 11
13 12
14 13
15 14
16 15
17 16
18 17
19 18
20 19
21 20
22 155
23 21
24 22
23
24
Scenario 34:
using only reformat

Input file:
rec
1
2
3
4
5
Output file:
rec count
1 5
2 5
3 5
4 5
5 5

Ex
using normalize
1.Rows into Columns
2.Columns into Rows
Scenario 35:

Input file:

• no sal
• 101 100
• 102 200
• 103 300

Output file:
• no sal pre_sal
• 101 100 0
• 102 200 100
• 103 300 200
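The pre_sal column is a one-record lag; a quick Python check (function name illustrative):

```python
def with_prev_sal(rows):
    """rows: (no, sal) -> (no, sal, pre_sal) where pre_sal is the
    previous record's sal, and 0 for the first record."""
    out, prev = [], 0
    for no, sal in rows:
        out.append((no, sal, prev))
        prev = sal
    return out

print(with_prev_sal([(101, 100), (102, 200), (103, 300)]))
# [(101, 100, 0), (102, 200, 100), (103, 300, 200)]
```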
Scenario 36:
Input file :
• no name amount
• 101 xxx 100
• 102 yyy 200
• 103 tyh 300
• 104 hgj 400
• 105 fgh 500
• 106 ujh 600

The requirement is to print the 3rd, 4th and 6th records.

Output file :
• no name amount
• 103 tyh 300
• 104 hgj 400
• 106 ujh 600
Scenario 37:
Input file:
• col1 col2
• 1 A
• 2 B
• 3 C

Output file:
• 123
• ABC
Scenario 38:
Input file (cust_id, no_of_trans, and a vector of (trans_date, amount)):
cust_id no_of_trans trans_date amount
101 2 [13-04-2017 300; 09-02-2016 200]
102 1 [11-02-2017 150]

Output file:
cust_id no_of_trans trans_date amount
101 2 13-04-2017 300
101 2 09-02-2016 200
102 1 11-02-2017 150
Scenario 39:
Input file:
A,10
B,20
A,10
C,5
B,5

Output file (cumulative sum per key):
A,10
A,20
B,20
B,25
C,5

Condition: we should not use the SCAN component.
Scenario 40:
Input file:
1
2
3
4
.
.
20

Output file:
1
2
.
.
10
55
11
12
13
.
.
20
155

In the above scenario, the requirement is to display the sum after every 10 records.
Scenario 41:

Display Header & Trailer records without using next_in_sequence() and Dedup Sorted Component.
• Scenario 42:
• 1
• 11
• 111
• 1111
• Scenario 43:(Hollow Right Angled Triangle):
• 1
• 11
• 1 1
• 1 1
• 11111
Scenario 44:Equilateral Triangle(with &
without spaces)
Scenario 45:Pascal's Triangle
Scenario 46:
Customer File:
cust_id name salary date amount
123 abc 12000 12-01-2017 250
123 abc 12000 14-01-2017 300
123 abc 12000 15-01-2017 500
456 xyz 15000 12-01-2017 450
456 xyz 15000 13-01-2017 250
123 abc 12000 17-01-2017 900
123 abc 12000 14-02-2017 550
456 xyz 15000 21-01-2017 250
456 xyz 15000 20-02-2017 250
123 abc 12000 25-02-2017 770
123 abc 12000 14-01-2017 200

Output File (cust_id, name, month_no, salary, and a vector of (amount, date), with amounts summed per date):
cust_id name month_no salary  amount,date vector
123 abc 1 12000  250,12-01-2017; 500,14-01-2017; 500,15-01-2017; 900,17-01-2017
123 abc 2 12000  550,14-02-2017; 770,25-02-2017
456 xyz 1 15000  450,12-01-2017; 250,13-01-2017; 250,21-01-2017
456 xyz 2 15000  250,20-02-2017

Scenario 47:
Customer File: same as Scenario 46.

Output File (cust_id, name, salary, and a vector of (month_no, amt) covering all 12 months):
cust_id name salary  month_no amt
123 abc 12000  1 2150; 2 1320; 3 0; 4 0; 5 0; 6 0; 7 0; 8 0; 9 0; 10 0; 11 0; 12 0
456 xyz 15000  1 950; 2 250; 3 0; 4 0; 5 0; 6 0; 7 0; 8 0; 9 0; 10 0; 11 0; 12 0
Scenario 48:
Develop a Unix script to achieve the below requirement.

One directory receives a number of .dat files every day. Each file has some data. You have to generate a .ctl file for each data file, containing file_name|date|count_from_file|

e.g.
cd /home/id/dat   -----list of dat files
ls -l
a.dat
b.dat
c.dat

O/P:
cd /home/id/ctl   ----list of control files to be generated based on the dat files
ls -l
a.ctl
b.ctl
c.ctl

I/P data:   ------data in the dat file, for one file
cat a.dat
ssdjh
sfdgfgh
arrehytyku
ettrjtjy

O/P data:
cat a.ctl   ------generated control file for the above data file
a.dat|20170417|4|
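The scenario asks for a Unix script; to stay in one sketch language, here is an equivalent Python version (all paths and the function name are illustrative, not the expected shell answer):

```python
import glob
import os
from datetime import date

def make_ctl_files(dat_dir, ctl_dir, run_date=None):
    """For every *.dat in dat_dir, write <name>.ctl in ctl_dir holding
    file_name|YYYYMMDD|line_count| as in Scenario 48."""
    stamp = (run_date or date.today()).strftime("%Y%m%d")
    os.makedirs(ctl_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(dat_dir, "*.dat"))):
        name = os.path.basename(path)
        with open(path) as f:
            count = sum(1 for _ in f)  # record count of the dat file
        with open(os.path.join(ctl_dir, name[:-4] + ".ctl"), "w") as f:
            f.write(f"{name}|{stamp}|{count}|\n")
```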
Scenario 49:
It should accept Todays_date (like 2017-05-09) as input dynamically and should store the .ctl files in "/home/id/ctl/today's_date/".

e.g.
cd /home/id/data/today's_date/   -----list of dat files
ls -l
a.dat
b.dat
c.dat

O/P:
cd /home/id/ctl/today's_date/   ----list of control files to be generated based on the dat files
ls -l
a.ctl
b.ctl
c.ctl

I/P data:   ------data in the dat file, for one file
cat a.dat
ssdjh
sfdgfgh
arrehytyku
ettrjtjy

O/P data:
cat a.ctl   ------generated control file for the above data file
a.dat|20170417|4|
Scenario 50:
*****
****
***
**
*
Scenario_51

Input: output:
1,100
2,100
2,100
2,200
3,100
3,100
3,200
3,200
3,300
3,300
3,300
3,300
3,400
4,100
Scenario_52 Output1:

Input:
Rambabu
Cherukuri

Output:
Scenario_53: Filling gaps
Input:
1
2
3-6
7
8-10

Normalize length (repaired sketch of the original DML):
length :: if (string_index(in.data, "-") > 0)
            ((integer(4))string_split_no_empty(in.data, "-")[1]
             - (integer(4))string_split_no_empty(in.data, "-")[0]) + 1
          else 1;

normalize ::
  if (string_index(in.data, "-") == 0) in.data
  else (string(""))((integer(4))string_split_no_empty(in.data, "-")[0] + index);
Output:
1
2
3
4
5
6
7
8
9
10
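The same gap-filling idea, sketched in Python for verification (function name illustrative):

```python
def fill_gaps(tokens):
    """tokens like '1' or '3-6': expand each range into its individual
    numbers, keeping single values as-is."""
    out = []
    for t in tokens:
        if "-" in t:
            lo, hi = map(int, t.split("-"))
            out.extend(range(lo, hi + 1))
        else:
            out.append(int(t))
    return out

print(fill_gaps(["1", "2", "3-6", "7", "8-10"]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```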
Scenario_54

I/p:
11
13
18
17
12
19
14
20

O/p:
11,12
13,14
17,18
19,20

Option 1 - Rollup:
out.data :: vector_sort(accumulation(in.input));

Option 2 - Normalize:
length :: length_of(in.data) / 2;
out :: string_concat(in.data[2*index], ",", in.data[2*index + 1]);
Scenario_55

I/p
1
2
3
4
5

O/p
1
11
101
1001
10001
• if(index == 0)in.emp_name[0]
• else if (index == 1)in.emp_name[1]
• else in.emp_name[2]
• if(index == 0)in.emp_no[2]
• else if (index == 1)in.emp_no[1]
• else in.emp_no[0]
