CD MQP Solved

The document provides an overview of compilers, their working process, and applications. It explains concepts like tokens, patterns, and lexemes, along with common programming errors and error recovery strategies in compiler design. Additionally, it discusses left recursion elimination in grammars and evaluates whether a specific grammar is LL(1).

Module -1

1 a. What is a Compiler? Explain the working of a Compiler with your own example.

What is a Compiler?

A compiler is a special program that translates code written in a high-level programming language (like C, C++, or Java) into machine code or assembly language that a computer's processor can execute.

It performs this translation all at once before the program is run, unlike an interpreter, which translates and executes code line by line.

Working of a Compiler:

The compilation process typically happens in multiple stages:

1. Lexical Analysis

o Breaks the source code into tokens (keywords, identifiers, symbols).

o Example: int a = 5; → tokens: int, a, =, 5, ;

2. Syntax Analysis (Parsing)

o Checks if the tokens form a valid structure (grammar of the language).

o Builds a parse tree or abstract syntax tree (AST).

3. Semantic Analysis
o Checks for logical errors, such as type checking and variable declarations.

o Example: Ensuring a = "hello"; is invalid if a is declared as an int.

4. Intermediate Code Generation

o Generates an intermediate representation (IR) of the code that is easy to optimize.

5. Code Optimization

o Improves the intermediate code to run faster or use less memory.

6. Target Code Generation

o Converts the optimized code into machine code for the target CPU.

7. Code Linking and Assembly

o Links external libraries and prepares the final executable program.

Example:

High-level Code (in C):

int main() {

int a = 10;

int b = 20;

int sum = a + b;

return sum;
}
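The first three stages above can be sketched in Python for a single statement like sum = a + b. This is an illustrative sketch, not a real compiler: the function names and the tuple-based AST are my own.

```python
# Hedged sketch of three compiler stages for the statement "sum = a + b;"
# (names and data structures are illustrative, not a real compiler's).

def lex(src):
    # Lexical analysis: break the statement into tokens.
    return src.replace(";", " ;").split()

def parse(tokens):
    # Syntax analysis: build a tiny AST for "<id> = <id> + <id> ;".
    target, _eq, left, _plus, right, _semi = tokens
    return ("assign", target, ("+", left, right))

def gen_tac(ast):
    # Intermediate code generation: emit three-address code.
    _, target, (op, left, right) = ast
    return [f"t1 = {left} {op} {right}", f"{target} = t1"]

tokens = lex("sum = a + b;")
ast = parse(tokens)
print(gen_tac(ast))   # → ['t1 = a + b', 'sum = t1']
```

The real stages are far richer, but the pipeline shape (tokens → tree → intermediate code) is exactly the one listed above.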

1 b. List and explain applications of Compiler design.

Applications of Compiler Design

(Write these 8 points briefly in your exam for full marks)

1. Language Translation

o Converts high-level language (like C, Java) into machine language.

o Helps the computer understand the code.

2. Code Optimization

o Improves the speed and performance of the program.

o Removes unnecessary or repeated code.

3. Error Detection

o Finds syntax and semantic errors in the code before execution.

o Helps programmers fix mistakes early.

4. Software Development Tools

o Used in IDEs like Code::Blocks, Eclipse, etc.

o Helps with debugging and auto-completion.

5. Embedded Systems

o Compiles code for small devices like washing machines or robots.
o Generates efficient code for limited memory.

6. Interpreters and Virtual Machines

o Helps build interpreters for Python, JVM for Java.

o Uses compiler techniques to run programs.

7. Just-In-Time (JIT) Compilation

o Used in Java and .NET to compile code at runtime.

o Makes programs faster during execution.

8. Cross Compilation

o Compiles code for another system (e.g., Android apps from a PC).

o Useful in mobile and embedded app development.

1 c. Write a note on productivity tools.

Productivity tools are software applications that help users create documents, analyze data, communicate, and organize work more efficiently.

Common Types of Productivity Tools:

1. Word Processing Tools

o Example: Microsoft Word, Google Docs

o Used for creating and editing text documents.

2. Spreadsheet Tools

o Example: Microsoft Excel, Google Sheets

o Used for data entry, analysis, and calculations.

3. Presentation Tools

o Example: Microsoft PowerPoint, Google Slides

o Used for making visual presentations and slideshows.


4. Email and Communication Tools

o Example: Microsoft Outlook, Gmail, Slack

o Used for sending messages and collaborating with others.

5. Task and Time Management Tools

o Example: Trello, Todoist, Google Calendar

o Help plan, track, and organize tasks and schedules.

Benefits of Productivity Tools:

• Save time and effort

• Improve work quality

• Enable better communication

• Help manage and organize information

2 a. Consider the context-free grammar

S -> SS+ | SS* | a

i) Show how the string aa+a* can be generated by this grammar.
ii) Construct a parse tree for this string. (8 marks)

Given CFG:

S → SS+ | SS* | a

We need to:

i) Show how the string aa+a* can be generated

Target string: a a + a *
Let's derive it step-by-step:

Step 1: S → SS* (the last symbol of the string is *)

Step 2: Expand the left S with S → SS+ (to get + before *), and the right S with S → a.

Step 3: Expand the remaining S's with S → a.

Leftmost derivation:

S ⇒ SS* ⇒ SS+S* ⇒ aS+S* ⇒ aa+S* ⇒ aa+a*

Final string: aa+a*

ii) Parse Tree for aa+a*

          S
        / | \
       S  S  *
     / | \   |
    S  S  +  a
    |  |
    a  a

This tree shows how the string aa+a* is derived using the grammar rules.
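Since every S derives either a single a or two S's followed by an operator, the strings of this grammar are exactly postfix expressions over a, +, and *. A small stack count (an illustrative sketch, not part of the original answer) confirms that aa+a* is derivable:

```python
def derivable(s):
    """Check whether s is generated by S -> SS+ | SS* | a."""
    count = 0  # number of completed S subtrees so far
    for ch in s:
        if ch == "a":
            count += 1          # S -> a
        elif ch in "+*":
            if count < 2:
                return False    # SS+ / SS* need two operand subtrees
            count -= 1          # two S's and an operator reduce to one S
        else:
            return False        # symbol not in the grammar
    return count == 1           # exactly one S left: the start symbol

print(derivable("aa+a*"))  # → True
print(derivable("aa+*"))   # → False
```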

2 b. With a neat diagram explain Language processing system.

Language Processing System

A language processing system is a set of programs that helps convert a high-level language (like C, C++) into machine code that a computer can understand and run.

Main Steps in Language Processing System:


1. Preprocessor

o Removes comments and expands macros (e.g. #include, #define)

2. Compiler

o Checks the code for errors and converts it into assembly code.

3. Assembler

o Converts assembly code into object code (machine-level code).

4. Linker

o Joins object code with library files to create an executable file.

5. Loader

o Loads the executable file into memory and starts running the program.

Simple Diagram:

Source Code
     |
     v
Preprocessor
     |
     v
Compiler
     |
     v
Assembler
     |
     v
Linker
     |
     v
Loader
     |
     v
Program Execution

2 c. Write a short note on Interpreter.

Short Note on Interpreter

An interpreter is a program that reads and executes code line by line. Unlike a compiler, it does not create a separate executable file.

Features of Interpreter:

• Translates one line at a time

• Stops when it finds an error

• Slower than a compiler but easier for testing and debugging

• Good for scripting languages like Python, JavaScript

Example:

If you write a Python program:

print("Hello")

x = 5 + 3

The interpreter runs each line one by one.
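That line-by-line behaviour can be sketched in Python itself (using exec purely for illustration; a real interpreter parses and evaluates each line rather than delegating to exec):

```python
program = [
    'print("Hello")',
    'x = 5 + 3',
]

env = {}                 # shared namespace, like an interpreter's memory
for line in program:     # translate and run one line at a time
    exec(line, env)

print(env["x"])  # → 8
```

If a line contained an error, the loop would stop at that line, matching the "stops when it finds an error" behaviour above.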

Module -2
3 a . Explain tokens, patterns, and lexemes.
Demonstrate the same with examples.
Tokens, Patterns, and Lexemes

In compiler design, especially during lexical analysis, we deal with tokens, patterns, and lexemes.

1. Token

• A token is a category or class of a set of strings with similar meaning.

• It is the output of the lexical analyzer.


• Example tokens:

o Keyword (int, if, while)

o Identifier (x, sum)

o Operator (+, -, *)

o Number (123, 45.6)

2. Lexeme

• A lexeme is the actual text or word from the source code that
matches a token.

• It is the instance of a token in the program.

Example:
In this line of code:

int x = 10;

Lexemes are: int, x, =, 10, ;

3. Pattern

• A pattern is a rule or structure used to identify lexemes for a token.

• Usually written using regular expressions.

Example Patterns:

• For identifier: letter (letter | digit)*

• For number: digit+


Example Summary:

Let’s take this line of code:

int total = 100 + 20;

Lexeme   Token           Pattern
int      Keyword         int, float, char, etc.
total    Identifier      letter (letter | digit)*
=        Assignment Op   =
100      Number          digit+
+        Operator        +, -, *, /
20       Number          digit+
;        Delimiter       ;
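The patterns in the table can be written as regular expressions and used to classify each lexeme. This is a hedged sketch: the token names follow the table above, and the ordering of the pattern list is my own way of resolving overlaps (keywords before identifiers).

```python
import re

# Patterns taken from the table above; order resolves overlaps
# (keywords must be tried before the identifier pattern).
PATTERNS = [
    ("Keyword",       r"int|float|char"),
    ("Number",        r"\d+"),
    ("Identifier",    r"[A-Za-z][A-Za-z0-9]*"),
    ("Assignment Op", r"="),
    ("Operator",      r"[+\-*/]"),
    ("Delimiter",     r";"),
]

def classify(lexeme):
    """Return the token class of a lexeme, per the pattern table."""
    for token, pattern in PATTERNS:
        if re.fullmatch(pattern, lexeme):
            return token
    return "Unknown"

for lx in ["int", "total", "=", "100", "+", "20", ";"]:
    print(lx, "->", classify(lx))
```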

3 b. Discuss different types of common programming errors.

Common Types of Programming Errors

There are mainly 3 types of programming errors:

1. Syntax Errors

• What is it? Mistakes in the grammar of the programming language.
• Example:
if (x > 5 ← Missing closing )

• Caused by: Typos, missing semicolons, brackets, etc.

• Detected by: Compiler or interpreter during compilation.

2. Runtime Errors

• What is it? Errors that occur while the program is running.

• Example:
Dividing by zero → x = 10 / 0

• Caused by: Invalid user input, unavailable files, dividing by zero, etc.

• Detected by: Program crashes or throws exceptions during execution.

3. Logical Errors

• What is it? The program runs, but the output is wrong or not as
expected.

• Example:
Calculating average using sum / 2 instead of sum / total_count

• Caused by: Wrong formulas, incorrect conditions, or wrong logic.

• Detected by: Carefully checking output or using test cases.

Extra (Optional for 8 marks):


4. Semantic Errors

• What is it? Code follows syntax but doesn't do what the programmer intends.

• Example: Using the wrong variable in a calculation.

Summary Table:

Error Type      When Detected     Example                Fix
Syntax Error    Compile time      Missing )              Fix code syntax
Runtime Error   During execution  Divide by 0            Handle input, add checks
Logical Error   After output      Wrong result           Correct logic
Semantic Error  Hard to catch     Wrong meaning in code  Understand problem deeply

3 c. Write and apply an algorithm to eliminate left recursion from the following grammar.

S → A a | b
A → A c | S d | ε

Apply Algorithm to the Given Grammar

Algorithm (handles indirect left recursion):

1. Order the non-terminals A1, A2, ..., An.
2. For each Ai, replace every production Ai → Aj γ with j < i by substituting all alternatives of Aj.
3. Eliminate direct left recursion from Ai: replace A → A α | β with A → β A′, A′ → α A′ | ε.

Step 1: Order the non-terminals as S, A

S → A a | b has no direct left recursion, so it stays unchanged.

Step 2: Substitute S into A (remove indirect recursion)

A → A c | S d | ε is indirectly left-recursive through S (A → S d → A a d).
Replacing S in A → S d by its alternatives:

A → A c | A a d | b d | ε

Step 3: Eliminate direct left recursion from A

• Left-recursive alternatives: A → A c, A → A a d

• Non-left-recursive alternatives: A → b d, A → ε

Introduce a new non-terminal A′:

A → b d A′ | A′

A′ → c A′ | a d A′ | ε

Final Grammar:

S → A a | b

A → b d A′ | A′

A′ → c A′ | a d A′ | ε

4 a. Write the transition diagram that recognizes the lexemes matching the token Relational Operator (relop) and identifiers.

✦ Transition Diagram to Recognize:

1. Relational Operators (relop)

2. Identifiers (id)

1. Relational Operators (relop)

Relational operators include: <, <=, >, >=, ==, !=

Transition Diagram for relop:

Start
 |-- '<' --> [state1] -- '=' --> [state2]   (accept "<=")
 |               \-- other -->              (accept "<", retract one character)
 |-- '>' --> [state3] -- '=' --> [state4]   (accept ">=")
 |               \-- other -->              (accept ">", retract one character)
 |-- '=' --> [state5] -- '=' --> [state6]   (accept "==")
 |-- '!' --> [state7] -- '=' --> [state8]   (accept "!=")

2. Identifiers (id)

Identifiers are usually:

• Start with a letter (A–Z or a–z)


• Followed by letters or digits

Transition Diagram for id:

Start

|-- Letter --> [state1] -- (Letter/Digit) --> [state1] (loop)

(accept identifier)

Final Answer (Clean Summary):

Transition Diagram for relop:

• <, <=, >, >=, ==, != recognized using specific character sequences and transitions.

Transition Diagram for id:

• Starts with a letter, followed by any number of letters/digits.
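Both diagrams can be simulated directly in code. This is an illustrative sketch; the function names and longest-match convention are my own, not part of the standard answer.

```python
def match_relop(s):
    """Longest match at the start of s, following the relop diagram."""
    if s[:2] in ("<=", ">=", "==", "!="):
        return s[:2]              # two-character operators
    if s[:1] in ("<", ">"):
        return s[:1]              # single-character accept (retract one char)
    return None

def match_id(s):
    """A letter followed by letters/digits, following the id diagram."""
    if not s[:1].isalpha():
        return None
    i = 1
    while i < len(s) and s[i].isalnum():
        i += 1                    # loop on Letter/Digit in state1
    return s[:i]

print(match_relop("<= 5"))   # → '<='
print(match_relop("<5"))     # → '<'
print(match_id("sum1 + 2"))  # → 'sum1'
```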

4 b. Discuss different error recovery strategies.

Error Recovery Strategies in Compiler Design

When a compiler detects an error during syntax analysis, it uses error recovery strategies to continue parsing and report more errors in one go.

✦ 1. Panic Mode Recovery


• Idea: Skip input symbols until a synchronizing token (like ; or }) is found.

• Fast and simple, avoids infinite loops.

• May skip too much input.

Example: If an error is found in a statement, skip until ; to resume parsing from the next statement.

✦ 2. Phrase-Level Recovery

• Tries to correct the error by inserting, deleting, or replacing symbols.

• Tries local correction for minor errors.

• Can be complex and might introduce incorrect assumptions.

Example: Missing ) is added if an opening ( was found.

✦ 3. Error Productions

• Extend grammar with error rules to handle common mistakes.

• Parser detects and handles known error patterns.

Example:

stmt → error ';'

If anything wrong is found in a statement, it matches error and continues.
✦ 4. Global Correction

• Uses algorithms to find the minimum number of changes to make the input valid.

• Very accurate but slow and expensive, not used in practice.

Summary Table:

Strategy           Description                                Usage
Panic Mode         Skips symbols until a safe point           Simple, fast
Phrase-Level       Local corrections (insert/delete tokens)   More accurate
Error Productions  Add grammar rules for common errors        Easy to apply
Global Correction  Tries best correction with fewest changes  Not practical

4 c. Eliminate left recursion from the given grammar.
E→E+A|A

A→AB|B

B→B#|C

C→a|b
Step 1: Eliminate left recursion in E → E + A | A

This is left-recursive because it starts with E.

Use the standard transformation:

Original:

E→E+A|A

Transform:

E → A E'

E' → + A E' | ε

Step 2: Eliminate left recursion in A → A B | B

Also left-recursive.

Original:

A→AB|B

Transform:

A → B A'

A' → B A' | ε

Step 3: Eliminate left recursion in B → B # | C

Again, left-recursive.

Original:

B→B#|C
Transform:

B → C B'

B' → # B' | ε

Step 4: C → a | b has no left recursion, so keep it as is.

Final Grammar (Left Recursion Eliminated):

E → A E'

E' → + A E' | ε

A → B A'

A' → B A' | ε

B → C B'

B' → # B' | ε

C→a|b
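The transformation used in each step (A → A α | β becomes A → β A′, A′ → α A′ | ε) can be sketched generically. This is a hypothetical helper that handles immediate left recursion only; productions are lists of symbols, and ε is written literally.

```python
def eliminate_direct(nonterm, productions):
    """A -> A a | b  becomes  A -> b A',  A' -> a A' | ε."""
    rec = [p[1:] for p in productions if p and p[0] == nonterm]   # the α's
    non = [p for p in productions if not p or p[0] != nonterm]    # the β's
    if not rec:
        return {nonterm: productions}        # nothing to eliminate
    prime = nonterm + "'"
    return {
        nonterm: [p + [prime] for p in non], # A  -> β A'
        prime: [p + [prime] for p in rec] + [["ε"]],  # A' -> α A' | ε
    }

# E -> E + A | A   becomes   E -> A E',  E' -> + A E' | ε
print(eliminate_direct("E", [["E", "+", "A"], ["A"]]))
# → {'E': [['A', "E'"]], "E'": [['+', 'A', "E'"], ['ε']]}
```

Applying the same helper to A → A B | B and B → B # | C reproduces the other two steps above.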
Module -3
5 a.

Question:
Is the grammar
G={
S→L=R|R
R→L
L → *R | id
}
an LL(1) grammar?

Step-by-Step Answer:

1. Check for Left Recursion

There is no direct left recursion, but we check further:

• S→L=R|R

• R→L
So effectively S → L = R | L, derived indirectly.

Both alternatives of S can begin with L, which causes a FIRST/FIRST conflict.

2. Find FIRST sets

We start with:

L → *R | id ⇒ FIRST(L) = { *, id }

R→L ⇒ FIRST(R) = FIRST(L) = { *, id }

S → L=R | R ⇒ FIRST(L=R) = { *, id }, FIRST(R) = { *, id }

So,

FIRST(S → L=R) = { *, id }

FIRST(S → R) = { *, id }

→ Both alternatives of S start with the same FIRST set: this violates the LL(1) rule (FIRST sets must not overlap).

Conclusion:

No, the grammar is not LL(1)

Because:

• FIRST sets conflict for S → L = R and S → R

• Parser cannot decide which rule to choose based on a single lookahead

• So it is not suitable for predictive (LL(1)) parsing

Final Answer (easy and exam-ready, 8 marks):


No, the grammar is not LL(1).
Because the FIRST sets of the two productions for S → L = R and
S → R both contain { *, id }, they overlap.
This creates a parsing conflict, meaning the parser cannot choose
a production with just one symbol of lookahead.
Therefore, the grammar is not LL(1).

5 b. Explain recursive descent parsing with example.

Answer:

Recursive Descent Parsing is a top-down parsing technique where:

• Each non-terminal in the grammar is implemented as a recursive function

• It tries to match input tokens using grammar rules

• If a rule matches, it moves ahead; otherwise, it backtracks or fails

How it works:

• The parser reads the input left to right

• It builds the parse tree from the top (start symbol) down

• Uses recursive functions for each rule

Example Grammar:

E→T+E|T
T → id

Input:

id + id

Recursive Functions:

function E() {
    T()
    if (nextToken == '+') {
        match('+')
        E()
    }
}

function T() {
    match('id')
}

Steps:

1. E() calls T() → matches id

2. sees + → matches +

3. calls E() again → matches id


Input matched → parsing successful!
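The pseudocode above can be made runnable in Python (a sketch: input is a flat token list, and error handling is reduced to a single exception):

```python
def parse(tokens):
    """Recursive descent parser for E -> T + E | T, T -> id."""
    pos = 0

    def match(expected):
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == expected:
            pos += 1
        else:
            raise SyntaxError(f"expected {expected} at position {pos}")

    def E():                # E -> T + E | T
        T()
        if pos < len(tokens) and tokens[pos] == "+":
            match("+")
            E()

    def T():                # T -> id
        match("id")

    E()
    return pos == len(tokens)   # success iff all input is consumed

print(parse(["id", "+", "id"]))  # → True
```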

5 c. Explain shift reduce parsing technique with example.

Answer:

Shift-Reduce Parsing is a bottom-up parsing technique.

It tries to reduce the input string to the start symbol of the grammar
using a stack and a set of grammar rules.

Steps in Shift-Reduce Parsing:

1. Shift: Push the next input symbol onto the stack

2. Reduce: Replace symbols on the stack with a grammar rule's left-hand side

3. Accept: If the stack has only the start symbol and input is empty

4. Error: If no valid move can be made

Example Grammar:

E→E+T|T

T → id

Input:

id + id

Parsing Steps:

Stack      Input       Action
$          id + id $   Shift
$ id       + id $      Reduce by T → id
$ T        + id $      Reduce by E → T
$ E        + id $      Shift
$ E +      id $        Shift
$ E + id   $           Reduce by T → id
$ E + T    $           Reduce by E → E + T
$ E        $           Accept
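The trace above can be reproduced with a small simulation. This sketch is hard-wired to this one grammar (a real shift-reduce parser consults a parsing table to choose between shifting and reducing):

```python
def shift_reduce(tokens):
    """Shift-reduce parse of E -> E + T | T, T -> id on a token list."""
    stack, trace, i = [], [], 0
    while True:
        if stack[-1:] == ["id"]:                 # handle: T -> id
            stack[-1] = "T"; trace.append("reduce T -> id")
        elif stack[-3:] == ["E", "+", "T"]:      # handle: E -> E + T
            stack[-3:] = ["E"]; trace.append("reduce E -> E + T")
        elif stack == ["T"]:                     # handle: E -> T (leftmost T)
            stack[0] = "E"; trace.append("reduce E -> T")
        elif i < len(tokens):
            stack.append(tokens[i]); i += 1; trace.append("shift")
        else:
            break
    return stack == ["E"], trace                 # accept iff only E remains

ok, trace = shift_reduce(["id", "+", "id"])
print(ok)  # → True
```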

6 a. Show that the following grammar is LL(1).

Given Grammar:
S→A
A→aB|Ad
B→bBC|f
C→g

Step 1: Check for Left Recursion

Look at:

A → a B | A d

→ This is left-recursive, because A calls itself directly on the left side (A → A d).

So the grammar is not LL(1).

LL(1) grammars must NOT have left recursion.

Final Answer (Simple and Scoring – 8 marks):

No, the grammar is not LL(1) because it contains left recursion in the production:

A → A d

In LL(1) grammars, left recursion is not allowed because it causes infinite recursion in top-down parsers.
Therefore, this grammar is not suitable for LL(1) parsing.

6 b. Write the procedure to compute FIRST and FOLLOW of the given grammar.

Procedure to compute FIRST set:

The FIRST(X) of a symbol X is the set of terminals that begin the strings derivable from X.

Steps:

1. If X is a terminal, then FIRST(X) = { X }

2. If X is a non-terminal and has a production like X → ε, then add ε to FIRST(X)

3. If X → Y₁ Y₂...Yn:

o Add FIRST(Y₁) to FIRST(X),

o If FIRST(Y₁) contains ε, check Y₂, and so on

o If all Y₁ to Yn can derive ε, then add ε to FIRST(X)

Procedure to compute FOLLOW set:

The FOLLOW(A) of a non-terminal A is the set of terminals that can appear immediately after A in some sentential form.

Steps:

1. Place $ (end of input) in FOLLOW(S), where S is the start symbol

2. For a production A → α B β:

o Add FIRST(β) − {ε} to FOLLOW(B)

3. If A → α B or A → α B β and FIRST(β) contains ε:

o Add FOLLOW(A) to FOLLOW(B)

4. Repeat until no more symbols can be added

Example (Optional to Add for Clarity):

For grammar:

S→AB
A→a|ε

B→b

• FIRST(A) = { a, ε }

• FIRST(B) = { b }

• FIRST(S) = { a, b }

• FOLLOW(S) = { $ }

• FOLLOW(A) = { b } (because A is followed by B)

• FOLLOW(B) = { $ }
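The procedure above is a fixed-point computation, and it can be run on this example grammar directly. A sketch (ε is represented literally, and the grammar encoding is my own):

```python
EPS = "ε"
grammar = {"S": [["A", "B"]], "A": [["a"], [EPS]], "B": [["b"]]}
terminals = {"a", "b"}

first = {n: set() for n in grammar}

def first_of(sym):
    # FIRST of a single symbol: a terminal (or ε) is its own FIRST.
    if sym == EPS or sym in terminals:
        return {sym}
    return first[sym]

def first_of_string(syms):
    # FIRST of a sequence Y1 Y2 ... Yn (rule 3 above).
    result = set()
    for sym in syms:
        f = first_of(sym)
        result |= f - {EPS}
        if EPS not in f:
            return result
    result.add(EPS)          # every Yi can derive ε
    return result

changed = True
while changed:               # repeat until no FIRST set grows
    changed = False
    for head, prods in grammar.items():
        for prod in prods:
            new = first_of_string(prod)
            if not new <= first[head]:
                first[head] |= new
                changed = True

follow = {n: set() for n in grammar}
follow["S"].add("$")         # rule 1: $ in FOLLOW of the start symbol
changed = True
while changed:               # repeat until no FOLLOW set grows
    changed = False
    for head, prods in grammar.items():
        for prod in prods:
            for i, sym in enumerate(prod):
                if sym not in grammar:
                    continue
                tail = first_of_string(prod[i + 1:])
                new = tail - {EPS}              # rule 2: FIRST(β) − {ε}
                if EPS in tail:
                    new |= follow[head]         # rule 3: FOLLOW(A) into FOLLOW(B)
                if not new <= follow[sym]:
                    follow[sym] |= new
                    changed = True

print(first["S"], first["A"], follow["A"], follow["B"])
```

Running this prints exactly the sets listed above: FIRST(S) = {a, b}, FIRST(A) = {a, ε}, FOLLOW(A) = {b}, FOLLOW(B) = {$}.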

6 c. Explain handle pruning with example.

Answer:

Handle Pruning is a technique used in bottom-up parsing, especially in shift-reduce parsers, to build the parse tree from leaves to the root.

What is a Handle?

A handle is:

• A substring of the input

• That matches the right-hand side (RHS) of a grammar production

• And can be reduced to a non-terminal (left-hand side of the rule)
Handle Pruning Process:

1. Scan the input from left to right

2. Find a handle (a pattern that matches RHS of a rule)

3. Reduce it to the left-hand side (non-terminal)

4. Repeat until you get the start symbol

Example:

Grammar:

E→E+T|T

T → id

Input string: id + id

Steps:

1. id → matches T → id ⇒ reduce to T

2. T → matches E → T ⇒ reduce to E

3. + → shift

4. id → reduce to T

5. E + T → matches E → E + T ⇒ reduce to E

6. Done — reduced to start symbol!


Module -4

7 a. Show that the following grammar is SLR(1).

E → E + T | T
T → T * F | F
F → ( E ) | id

Step-by-step Explanation:

To show that a grammar is SLR(1), we need to follow these steps:

Step 1: Augment the Grammar

Add a new start symbol:

E' → E
E → E + T | T
T → T * F | F
F → ( E ) | id

Step 2: Construct LR(0) Items

This involves creating the canonical collection of LR(0) items (states with dotted productions).
(For the exam, just mention this step; no need to draw all states unless asked.)

Step 3: Compute FOLLOW sets

We use these for reduce actions in SLR(1).

FIRST sets:

FIRST(E) = { (, id }

FIRST(T) = { (, id }

FIRST(F) = { (, id }

FOLLOW sets:

FOLLOW(E) = { ), $, + }

FOLLOW(T) = { ), $, +, * }

FOLLOW(F) = { ), $, +, * }

Step 4: Build SLR(1) Parsing Table

• Use LR(0) items for shift/reduce states

• Use FOLLOW sets to decide reduce actions

• No conflicts (shift/reduce or reduce/reduce) must be found

In this grammar, each reduce action is placed only in the FOLLOW-set entries of the reduced non-terminal, and no shift and reduce actions conflict.

Conclusion:

The grammar is SLR(1) because:

• The parsing table built using LR(0) items + FOLLOW sets has no shift/reduce or reduce/reduce conflicts

• Left recursion is not an obstacle here: bottom-up parsers such as SLR(1) handle left-recursive grammars (unlike LL(1) parsers)

• Therefore, it is suitable for SLR(1) parsing

Final Answer (Exam Ready — 8 marks):

Yes, the grammar is SLR(1).

By augmenting the grammar, constructing LR(0) items, and computing FOLLOW sets, we can build the SLR(1) parsing table.
Since there are no shift/reduce or reduce/reduce conflicts, the grammar is valid for SLR(1) parsing.

7 b. What is dependency graph? Write dependency graph for the expression 3 * 5 with suitable top-down grammar.

Answer:

What is a Dependency Graph?

• A Dependency Graph is a diagram used in syntax-directed translation.

• It shows how the attributes of grammar symbols depend on each other.

• Helps in deciding the order of evaluation of values.

• Nodes = grammar symbols with attributes

• Edges = dependency between them

Grammar used (Top-down, without left recursion):

E → T E'

E' → * T E' | ε

T → num

Expression to parse: 3 * 5

We apply the grammar:

E  → T E'        (T derives 3, E' derives * 5)
T  → num         (num = 3)
E' → * T E'      (T derives 5, then E' → ε)

giving the string 3 * 5.

Attribute Information (assume .val is the value):

• T1.val = 3

• T2.val = 5

• E'.val = T1.val * T2.val = 3 * 5 = 15

• E.val = E'.val = 15

Dependency Graph:

T1.val = 3 T2.val = 5

\ /

\ /

E'.val = T1.val * T2.val

E.val = E'.val

8 a. Construct LR(0) items and parsing table for the following grammar.

S → C C
C → c C
C → d

Step 1: Augment the Grammar

Add a new start symbol S' → S

S' → S

S→CC

C→cC

C→d
Step 2: LR(0) Items (States)

We build LR(0) items by placing a dot (·) in each production to show parsing progress.

I0 (Initial State):

S' → · S

S→·CC

C→·cC

C→·d

• On S → go to I1

• On C → go to I2

• On c → shift to I3

• On d → shift to I4

I1:

S' → S · Accepting state

I2:

S→C·C

C→·cC

C→·d

• On C → I5
• On c → I3

• On d → I4

I3:

C→c·C

C→·cC

C→·d

• On C → I6

• On c → I3

• On d → I4

I4:

C→d· (Reduce by C → d)

I5:

S→CC· (Reduce by S → C C)

I6:

C→cC· (Reduce by C → c C)

Step 3: LR(0) Parsing Table

Terminals: c, d, $
Non-terminals: S, C
State   c    d    $    S   C
0       S3   S4        1   2
1                 acc
2       S3   S4            5
3       S3   S4            6
4       R3   R3   R3
5       R1   R1   R1
6       R2   R2   R2

Legend:

• S = Shift

• R = Reduce

• acc = Accept

• R1: S → C C

• R2: C → c C

• R3: C → d
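The table can be encoded as dictionaries and driven by the standard LR algorithm. A sketch (s = shift to a state, r = reduce by a production number, transcribed from the table above):

```python
# ACTION and GOTO tables transcribed from the parsing table above.
ACTION = {
    (0, "c"): ("s", 3), (0, "d"): ("s", 4),
    (1, "$"): ("acc",),
    (2, "c"): ("s", 3), (2, "d"): ("s", 4),
    (3, "c"): ("s", 3), (3, "d"): ("s", 4),
    (4, "c"): ("r", 3), (4, "d"): ("r", 3), (4, "$"): ("r", 3),
    (5, "c"): ("r", 1), (5, "d"): ("r", 1), (5, "$"): ("r", 1),
    (6, "c"): ("r", 2), (6, "d"): ("r", 2), (6, "$"): ("r", 2),
}
GOTO = {(0, "S"): 1, (0, "C"): 2, (2, "C"): 5, (3, "C"): 6}
PRODS = {1: ("S", 2), 2: ("C", 2), 3: ("C", 1)}  # head, RHS length

def lr_parse(tokens):
    """Table-driven LR parse; returns True iff the input is accepted."""
    stack, tokens, i = [0], tokens + ["$"], 0
    while True:
        act = ACTION.get((stack[-1], tokens[i]))
        if act is None:
            return False                       # no entry: syntax error
        if act[0] == "acc":
            return True
        if act[0] == "s":
            stack.append(act[1]); i += 1       # shift: push new state
        else:                                  # reduce by production act[1]
            head, size = PRODS[act[1]]
            del stack[len(stack) - size:]      # pop one state per RHS symbol
            stack.append(GOTO[(stack[-1], head)])

print(lr_parse(["c", "d", "d"]))  # → True
print(lr_parse(["c", "d"]))       # → False
```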

8 b. Write SDD for simple type declarations. Also write the dependency graph for the declaration int id1, id2, id3.

1. What is SDD?
• Syntax-Directed Definition (SDD) is a grammar with rules that
define how to compute attribute values.

• Used in compilers to attach meaning (semantics) to syntax.

2. Grammar for Simple Type Declarations

D →TL

T → int

L → id

L → L , id

3. Attributes

Let’s define an attribute L.type to store the type (int, etc.)

We define rules like:

• T.type = "int"

• L.type = T.type

• For lists: each id in the list gets L.type

4. SDD (Syntax-Directed Definition)

D → T L       { L.type = T.type }

T → int       { T.type = "int" }

L → id        { print("id has type", L.type) }

L → L1 , id   { L1.type = L.type; print("id has type", L.type) }


5. Example Input:

int id1, id2, id3;

Parse tree with dependency edges (the type flows from T down the list of identifiers):

          D
         / \
        T   L3
        |  / | \
      int L2 ,  id3
         / | \
        L1 ,  id2
        |
        id1

Dependency graph: T.type → L3.type → L2.type → L1.type, and each Li.type → the entry of its id.
Module – 5
9 a . Write a list of the common three address
instruction forms with example.
Three Address Code (TAC) is an intermediate code used in
compilers.
Each instruction has at most three addresses (operands).

Common Three Address Instruction Forms:

Assignment Instruction

Format:

x = y op z

Example:

t1 = a + b

Copy Instruction

Format:
x=y

Example:

t2 = t1

Unary Operation

Format:

x = op y

Example:

t3 = -a

Conditional Jump

Format:

if x relop y goto L

Example:

if a < b goto L1

Unconditional Jump

Format:

goto L

Example:

goto L2
Label Definition

Format:

L:

Example:

L1:

Indexed Assignment

Format:

x = y[i]

x[i] = y

Example:

t4 = A[i]

B[i] = t4

Pointer Assignment

Format:

x = *y

*x = y

Example:

t5 = *p
*q = t5

Function Call and Return

Format:

param x

call f, n

return x

Example:

param a

call max, 2

return t6

Final Summary (10 Marks):

Form Type           Example
Assignment          t1 = a + b
Copy                t2 = t1
Unary Op            t3 = -a
Conditional Jump    if a < b goto L1
Unconditional Jump  goto L2
Label               L1:
Indexed Assign      t4 = A[i]
Pointer Assign      t5 = *p
Call / Return       param a; call max, 2; return t6
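Generating assignment-form instructions from an expression tree takes only a few lines. A sketch, assuming the AST is a nested tuple and temporaries are numbered t1, t2, ...:

```python
import itertools

def gen_tac(node, code, temps):
    """Emit three-address code for a tuple AST like ('+', 'a', ('*', 'b', 'c'))."""
    if isinstance(node, str):
        return node                     # a leaf is already an address
    op, left, right = node
    l = gen_tac(left, code, temps)
    r = gen_tac(right, code, temps)
    t = f"t{next(temps)}"
    code.append(f"{t} = {l} {op} {r}")  # the x = y op z form
    return t

code = []
gen_tac(("+", "a", ("*", "b", "c")), code, itertools.count(1))
print(code)  # → ['t1 = b * c', 't2 = a + t1']
```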

9 b. Write a note on the following

(i) Input to the code generator

(ii) The target program

(i) Input to the Code Generator

• The code generator is the part of the compiler that produces machine code or assembly code.

• It takes intermediate code and other information to generate the final code.

Inputs include:

1. Intermediate Code – Like three-address code (TAC)

2. Symbol Table – Contains info about variables, types, addresses

3. Register Info – For allocation of CPU registers

4. Runtime Support Info – Stack, heap, function calls, etc.

Example:

If intermediate code is:

t1 = a + b

The code generator may produce:

MOV R1, a
ADD R1, b

MOV t1, R1

(ii) The Target Program

• The target program is the final output of the compiler

• It is in the target language (machine code, assembly, or bytecode)

Features:

1. It should be correct – same meaning as source

2. It should be efficient – fast and optimized

3. It must follow target machine rules

Example:

For input a = b + c;
Target code (assembly) might be:

MOV R1, b

ADD R1, c

MOV a, R1
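Mapping one binary TAC instruction to this register style can be sketched as follows (the MOV/ADD mnemonics and single register R1 are assumptions taken from the examples above, not a real instruction set):

```python
OPMAP = {"+": "ADD", "-": "SUB", "*": "MUL", "/": "DIV"}

def to_asm(tac):
    """Translate one 'x = y op z' instruction into MOV/op/MOV form."""
    dest, expr = [part.strip() for part in tac.split("=", 1)]
    left, op, right = expr.split()
    return [f"MOV R1, {left}",           # load first operand into R1
            f"{OPMAP[op]} R1, {right}",  # apply the operator
            f"MOV {dest}, R1"]           # store the result

print(to_asm("a = b + c"))
# → ['MOV R1, b', 'ADD R1, c', 'MOV a, R1']
```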

Final Summary (10 marks):

Topic                    Description
Input to Code Generator  Takes intermediate code, symbol table, and register info to produce machine code
Target Program           The final program in machine/assembly language that runs on the hardware

10 a. Write SDD for flow of control statements.

Syntax Directed Definition (SDD) for Flow of Control Statements

Flow of control statements include if-else, while, for, etc. SDD helps
in generating intermediate code or checking semantics during
compilation.

We'll show an SDD for a simple if-else and while statement using
syntax rules and semantic rules.

1. Grammar Rules with SDD for if-else

S → if ( E ) S1 else S2

Attributes:

• E.true, E.false: labels for true and false conditions

• S1.next, S2.next: labels for next instruction

• S.next: label where to go after if-else

Semantic Rules:

E.true = newlabel()

E.false = newlabel()

S1.next = S.next

S2.next = S.next

S.code = E.code ||
         "if E.val goto E.true" ||
         "goto E.false" ||
         "E.true: "  || S1.code ||
         "goto S.next" ||
         "E.false: " || S2.code

2. Grammar Rule with SDD for while Loop

S → while ( E ) S1

Attributes:

• begin: label at the start of loop

• E.true, E.false: labels for condition

• S1.next: where to go after the body

• S.next: exit label

Semantic Rules:

begin = newlabel()

E.true = newlabel()

E.false = S.next

S1.next = begin

S.code = "begin: " || E.code ||
         "if E.val goto E.true" ||
         "goto S.next" ||
         "E.true: " || S1.code ||
         "goto begin"

Explanation:

• newlabel() creates unique labels for jumps.

• This structure helps in generating intermediate code using backpatching.

• || means concatenation of code.
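The while rule can be exercised with a small label generator. A sketch: the label names, the shape of the condition code, and the list-of-strings representation of S.code are all illustrative assumptions.

```python
import itertools

labels = itertools.count(1)

def newlabel():
    """Create a unique label, as newlabel() does in the rules above."""
    return f"L{next(labels)}"

def gen_while(cond, body):
    """S -> while ( E ) S1, following the semantic rules above."""
    begin, true = newlabel(), newlabel()
    exit_ = newlabel()                       # plays the role of S.next / E.false
    return ([f"{begin}:"]
            + [f"if {cond} goto {true}",     # E.code: jump to E.true ...
               f"goto {exit_}"]              # ... or fall out to S.next
            + [f"{true}:"] + body            # S1.code
            + [f"goto {begin}",              # loop back
               f"{exit_}:"])

for line in gen_while("a < b", ["a = a + 1"]):
    print(line)
```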

10 b. Write a note on a simple target machine model.

Simple Target Machine Model

A target machine is the platform (hardware or virtual) for which the compiler generates code. A simple target machine model is used in compiler design to help generate and optimize code effectively.

Key Features of a Simple Target Machine:

1. Memory Organization:

o Uses a stack, registers, and main memory.

o Variables are stored in memory locations.

o Instructions work on registers or memory.

2. Instruction Set:

o Simple instructions like:

▪ LOAD x → Load value of variable x into register.

▪ STORE x → Store register value into variable x.

▪ ADD x → Add value of x to register.

▪ SUB x → Subtract value of x from register.

▪ MUL x, DIV x → Multiply/Divide.

▪ JMP label → Unconditional jump.

▪ JZ label → Jump if zero.

▪ JNZ label → Jump if not zero.

3. Registers:

o Usually 1 or a few general-purpose registers (like R0, R1).

o Temporary values are stored in registers.

4. Instruction Format:

o Typically 3-address, 2-address, or 1-address instructions.

o Example (3-address): ADD R1, R2, R3 means R1 = R2 + R3.

5. Data Types:

o Supports basic types like int, float, char.


6. Control Flow:

o Uses labels and jump instructions to handle loops and conditions.

Example Instruction Sequence:

For the statement: a = b + c

The target machine code could be:

LOAD b

ADD c

STORE a
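This instruction sequence can be simulated with a single-accumulator interpreter. A sketch: only the three instructions used above are supported, and memory is a plain dict of variable values.

```python
def run(program, memory):
    """Execute LOAD/ADD/STORE instructions on a single accumulator."""
    acc = 0
    for instr in program:
        op, arg = instr.split()
        if op == "LOAD":
            acc = memory[arg]       # load variable into the accumulator
        elif op == "ADD":
            acc += memory[arg]      # add variable to the accumulator
        elif op == "STORE":
            memory[arg] = acc       # store accumulator back to memory
    return memory

mem = run(["LOAD b", "ADD c", "STORE a"], {"b": 10, "c": 20})
print(mem["a"])  # → 30
```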

Advantages of Using a Simple Model:

• Easy to understand and simulate.

• Helps in teaching code generation and optimization.

• Simplifies register allocation and instruction selection.
