
PRACTICAL FILE

BE (CSE) 6th Semester

Computer Network and Security

Submitted By
Tsewang Norboo
UE223109

Submitted To
Nitin Bhatia
Faculty, Computer Science and Engineering

Computer Science and Engineering


University Institute of Engineering and Technology
Panjab University, Chandigarh – 160014, INDIA
2024
Index

Sr. No.  Name of the Practical

1. Write a program for Lexical Analyser.
2. Given an input string for a regular expression, obtain the corresponding NFA with null-moves.
3. To implement various LEX programs.
4. To implement a Recursive Descent Parser.
5. To implement FIRST, FOLLOW and an LL(1) Parser.
6. To implement a Bottom-Up Parser.
7. To implement an LR Parser.
8. To implement Syntax-Directed Translation.

Experiment No. 1

Aim : Write a program for Lexical Analyser.

Introduction

A Lexical Analyzer (Lexer) is the first phase of a compiler. It processes the source code and converts it into
tokens. A token is the smallest unit of a program that has meaning, such as keywords, identifiers, operators,
constants, and symbols.
The Lexical Analyzer works as a scanner that reads the input character stream and groups them into
meaningful sequences, which are then passed to the Syntax Analyzer (Parser) for further processing.

Functions of Lexical Analyzer


1. Tokenization – Converts the input source code into a sequence of tokens.
2. Removes Whitespaces and Comments – Ignores unnecessary spaces and comments.
3. Error Handling – Detects illegal tokens and lexical errors.
4. Symbol Table Management – Stores identifiers and keywords for reference.

Tokens in Lexical Analysis


The following are the common types of tokens:
1. Keywords – Reserved words (e.g., if, else, while, int).
2. Identifiers – Variable names, function names.
3. Operators – Arithmetic (+, -, *, /), Relational (>, <, ==), Logical (&&, ||).
4. Constants – Numeric values (123, 3.14).
5. Special Symbols – ;, {}, (), [].
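For example, for the input int x = 10; the analyzer implemented below produces the tokens int (KEYWORD), x (IDENTIFIER), = (OPERATOR), 10 (NUMBER), and ; (SYMBOL).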

Lexical Analysis Process


1. Read the Input Source Code character by character.
2. Identify the Lexemes (meaningful sequences).
3. Classify Lexemes into Tokens based on predefined patterns.
4. Send Tokens to the Parser for Syntax Analysis.
Program :
import re

# Patterns are tried in order, so keywords are matched before identifiers.
token_patterns = [
    (r'\b(if|else|while|for|return|int|float|char)\b', 'KEYWORD'),
    (r'\b[A-Za-z_][A-Za-z0-9_]*\b', 'IDENTIFIER'),
    (r'\d+(\.\d+)?', 'NUMBER'),      # integers and simple floats such as 3.14
    (r'[+\-*/=<>!]+', 'OPERATOR'),
    (r'[(){},;]', 'SYMBOL'),
    (r'\s+', None),                  # whitespace is skipped, not tokenized
]

def lexical_analyzer(code):
    tokens = []
    index = 0
    while index < len(code):
        match_found = False
        for pattern, token_type in token_patterns:
            regex = re.compile(pattern)
            match = regex.match(code, index)
            if match:
                if token_type:
                    tokens.append((match.group(), token_type))
                index = match.end()
                match_found = True
                break
        if not match_found:
            raise ValueError(f"Unexpected character at index {index}: {code[index]}")
    return tokens

def print_token_table(tokens):
    print("Lexeme\tToken")
    print("-" * 20)
    for lexeme, token in tokens:
        print(f"{lexeme}\t{token}")

code_snippet = input("Enter : ")
tokens = lexical_analyzer(code_snippet)
print_token_table(tokens)
Output :

Conclusion
• Lexical Analysis is the first step of compilation.
• It converts source code into a sequence of tokens.
• Lex simplifies the implementation of a Lexical Analyzer by automatically generating token recognition code.
Experiment No. 2

Aim : Given an input string for a regular expression. Obtain the corresponding NFA with null-move.

Introduction

A regular expression describes a regular language, and every regular expression can be converted into an equivalent Non-deterministic Finite Automaton (NFA) with null-moves (ε-transitions). Thompson's construction builds such an NFA compositionally: each alphabet symbol and each operator of the regular expression (concatenation, alternation |, and repetition *) is translated into a small NFA fragment with one start and one accept state, and the fragments are glued together with ε-transitions. The resulting automaton accepts exactly the strings matched by the regular expression, and this conversion is the standard first step when a lexical analyzer is generated from token patterns.

Code :
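The listing below is a minimal, illustrative C++ sketch of Thompson's construction, not a full regular-expression parser: it assumes the expression is supplied in postfix form with an explicit concatenation operator '.' (so ab.c|* denotes (ab|c)*), uses '#' internally to mark ε-moves, and simply prints every transition of the ε-NFA it builds. The postfix convention, state numbering, and output format are illustrative choices.

#include <iostream>
#include <stack>
#include <string>
#include <vector>
using namespace std;

// One ε-NFA transition: from --symbol--> to ('#' denotes an ε-move).
struct Transition { int from; char symbol; int to; };

// An NFA fragment with a single start state and a single accept state.
struct Frag { int start, accept; };

vector<Transition> nfa;   // all transitions built so far
int stateCount = 0;

int newState() { return stateCount++; }
void addEdge(int from, char sym, int to) { nfa.push_back({from, sym, to}); }

// Build an ε-NFA from a well-formed postfix regular expression,
// e.g. "ab.c|*" for (ab|c)*  ('.' = concatenation, '|' = alternation, '*' = star).
Frag buildNFA(const string& postfix) {
    stack<Frag> st;
    for (char c : postfix) {
        if (c == '.') {                       // concatenation
            Frag b = st.top(); st.pop();
            Frag a = st.top(); st.pop();
            addEdge(a.accept, '#', b.start);  // glue the fragments with an ε-move
            st.push({a.start, b.accept});
        } else if (c == '|') {                // alternation
            Frag b = st.top(); st.pop();
            Frag a = st.top(); st.pop();
            int s = newState(), f = newState();
            addEdge(s, '#', a.start); addEdge(s, '#', b.start);
            addEdge(a.accept, '#', f); addEdge(b.accept, '#', f);
            st.push({s, f});
        } else if (c == '*') {                // repetition (Kleene star)
            Frag a = st.top(); st.pop();
            int s = newState(), f = newState();
            addEdge(s, '#', a.start); addEdge(s, '#', f);
            addEdge(a.accept, '#', a.start); addEdge(a.accept, '#', f);
            st.push({s, f});
        } else {                              // a single alphabet symbol
            int s = newState(), f = newState();
            addEdge(s, c, f);
            st.push({s, f});
        }
    }
    return st.top();
}

int main() {
    string postfix;
    cout << "Enter regular expression in postfix (e.g. ab.c|* for (ab|c)*): ";
    cin >> postfix;
    Frag result = buildNFA(postfix);
    cout << "Start state: " << result.start
         << ", Accept state: " << result.accept << "\n";
    for (auto& t : nfa)
        cout << t.from << " --" << (t.symbol == '#' ? 'e' : t.symbol)
             << "--> " << t.to << "\n";
    return 0;
}

For the input ab.c|*, the sketch prints the a, b, and c fragments, the ε-transitions that wire them into concatenation and alternation, and the ε-loop for the star, ending with the start and accept states of the whole automaton.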
Output :

Conclusion
By converting a regular expression into an NFA with null-moves (ε-moves), we effectively represent its
transitions and operations like concatenation, alternation, and repetition. This forms the foundation for
lexical analysis and pattern matching, enabling efficient string processing in compilers and search
algorithms.
Experiment No. 3

Aim : To implement various LEX programs.

Introduction to LEX

LEX is a lexical analyzer generator that is used to create scanners or tokenizers for processing structured
text. It is widely used in compiler construction for tokenizing input based on predefined patterns.

Purpose of LEX Programs


LEX programs are designed to scan and recognize patterns in input text and classify them into tokens based
on regular expressions. These programs can be used for:
1. Counting words, lines, and characters in a file.
2. Identifying keywords and identifiers in programming languages.
3. Removing comments from source code.

Working of a LEX Program


A LEX program consists of three sections:
1. Definition Section (%{ ... %})
o Contains C code and header files (#include <stdio.h>).
2. Rules Section (%% ... %%)
o Contains pattern-action rules written using regular expressions.
o If a pattern matches input text, the corresponding C code is executed.
3. User Code Section
o Defines the main() function to run the lexer.
Phases of Execution
1. Lexical Analysis: The input text is scanned, and patterns are identified.
2. Tokenization: The input is divided into tokens based on matching rules.
3. Processing: The defined actions (like printing tokens, counting, or filtering) are performed.
4. Output Generation: The processed data is displayed or written to a file.
Implementation of Various LEX Programs
Here are three commonly implemented LEX programs:

1) Counting Words, Lines, and Characters


• Reads input text and counts the number of words, lines, and characters.
• Uses regular expressions to identify spaces, words, and newlines.

2) Recognizing Keywords and Identifiers


• Identifies programming language keywords and identifiers.
• Differentiates reserved words from user-defined variables.
3) Removing Comments from a C Program
• Filters out single-line (//) and multi-line (/*...*/) comments.
• Outputs a clean code file without comments.
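As an illustration of the first of these tasks, a minimal LEX program for counting words, lines, and characters might look as follows (the counter names and rules are illustrative, not a prescribed solution); note how it follows exactly the three-section layout described above:

%{
#include <stdio.h>
int words = 0, lines = 0, chars = 0;   /* counters updated by the rules below */
%}

%%
\n          { lines++; chars++; }
[ \t]+      { chars += yyleng; }
[^ \t\n]+   { words++; chars += yyleng; }
%%

int yywrap(void) { return 1; }

int main(void)
{
    yylex();   /* scan standard input until EOF */
    printf("Lines: %d, Words: %d, Characters: %d\n", lines, words, chars);
    return 0;
}

Running lex (or flex) on this file produces lex.yy.c, which can then be compiled with GCC into an executable scanner.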

Advantages of LEX
• Automates lexical analysis in compiler design.
• Uses regular expressions for efficient pattern matching.
• Generates optimized C code, which can be compiled with GCC.
• Helps in preprocessing, filtering, and data extraction.
Conclusion
LEX programs play a crucial role in text processing and compiler construction. By defining tokenizing
rules, they help automate text analysis, syntax highlighting, and input filtering in programming
languages and tools.
Experiment No. 4

Aim : To implement a Recursive Descent Parser.

Introduction

Parsing is a fundamental process in compiler design and language processing, converting source code into a
structured representation for analysis. One of the simplest parsing techniques is Recursive Descent Parsing,
which consists of a set of recursive functions for each non-terminal in the grammar.
Recursive Descent Parsing is a top-down parsing method where each function attempts to match input
tokens with a production rule in a recursive manner. It is commonly used for simple and unambiguous
grammars such as LL(1) grammars, where parsing decisions are made using only one lookahead token.

Theory
Recursive Descent Parsing operates by implementing a function for each non-terminal in a given grammar.
The parser follows these principles:
1. Top-down parsing: It starts from the start symbol and recursively expands non-terminals using
production rules.
2. One-token lookahead: It makes decisions based on the next available token in the input.
3. Backtracking avoidance: LL(1) grammars ensure that parsing choices are deterministic without
needing to backtrack.

Algorithm
1. Define a function for each non-terminal in the grammar.
2. Read the input token and match it with expected grammar rules.
3. Recursively call functions for corresponding non-terminals.
4. If an input mismatch occurs, return an error.
5. If the input is parsed successfully, return a valid parse tree.
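For example, for the input i+i the parser below prints the expansions E → T E', T → F T', F → i, T' → ε, E' → + T E', T → F T', F → i, T' → ε, and E' → ε (the listing prints ε-productions as $), and then reports that the string is successfully parsed.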
Code :

#include <stdio.h>
#include <string.h>

#define SUCCESS 1
#define FAILED 0

// Grammar (left recursion removed):
// E  -> T E'
// E' -> + T E' | ε
// T  -> F T'
// T' -> * F T' | ε
// F  -> ( E ) | i

int E(), Edash(), T(), Tdash(), F();

const char *cursor;
char string[64];

int main()
{
    puts("Enter the string");
    scanf("%s", string);
    cursor = string;
    puts("");
    puts("Input            Action");
    puts("--------------------------------");
    if (E() && *cursor == '\0')
    {
        puts("--------------------------------");
        puts("String is successfully parsed");
        return 0;
    }
    else
    {
        puts("--------------------------------");
        puts("Error in parsing String");
        return 1;
    }
}

int E()
{
    printf("%-16s E -> T E'\n", cursor);
    if (T()) {
        if (Edash())
            return SUCCESS;
        else
            return FAILED;
    }
    else
        return FAILED;
}

int Edash()
{
    if (*cursor == '+') {
        printf("%-16s E' -> + T E'\n", cursor);
        cursor++;
        if (T()) {
            if (Edash())
                return SUCCESS;
            else
                return FAILED;
        }
        else
            return FAILED;
    }
    else {
        printf("%-16s E' -> $\n", cursor);   /* ε-production */
        return SUCCESS;
    }
}

int T()
{
    printf("%-16s T -> F T'\n", cursor);
    if (F()) {
        if (Tdash())
            return SUCCESS;
        else
            return FAILED;
    }
    else
        return FAILED;
}

int Tdash()
{
    if (*cursor == '*') {
        printf("%-16s T' -> * F T'\n", cursor);
        cursor++;
        if (F()) {
            if (Tdash())
                return SUCCESS;
            else
                return FAILED;
        }
        else
            return FAILED;
    }
    else {
        printf("%-16s T' -> $\n", cursor);   /* ε-production */
        return SUCCESS;
    }
}

int F()
{
    if (*cursor == '(') {
        printf("%-16s F -> ( E )\n", cursor);
        cursor++;
        if (E()) {
            if (*cursor == ')') {
                cursor++;
                return SUCCESS;
            }
            else
                return FAILED;
        }
        else
            return FAILED;
    }
    else if (*cursor == 'i') {
        printf("%-16s F -> i\n", cursor);
        cursor++;
        return SUCCESS;
    }
    else
        return FAILED;
}
Output :

Conclusion
Recursive Descent Parsing is a simple and effective method for parsing deterministic context-free grammars such as LL(1) grammars. This implementation demonstrates parsing arithmetic expressions using recursive function calls while maintaining operator precedence. Although it is limited in handling ambiguous or left-recursive grammars, it is widely used in lightweight language parsers and compilers.
Further enhancements can include:
• Support for more operators and complex expressions.
• Implementing a symbol table for variable recognition.
• Improving efficiency by eliminating unnecessary recursion.
Experiment No. 5

Aim : To implement FIRST, FOLLOW and an LL(1) Parser.

Introduction:
In compiler design, syntax analysis is a crucial phase where a parser verifies the syntactical structure of the
source code. For top-down parsing, FIRST and FOLLOW sets are foundational concepts that help in
constructing parsing tables and developing LL(1) parsers.

Objective:
To implement:
1. FIRST set for a given grammar
2. FOLLOW set for a given grammar
3. LL(1) Parsing Table and simulate LL(1) parsing

Theory:

FIRST Set:
The FIRST set of a symbol is the set of terminals that begin the strings derivable from that symbol.
Rules:
• If X → aα (a terminal a followed by any string α), then a ∈ FIRST(X)
• If X → ε, then ε ∈ FIRST(X)
• If X → Y1Y2…Yn and ε ∈ FIRST(Y1), …, ε ∈ FIRST(Yk), then add FIRST(Yk+1) (excluding ε) to FIRST(X)
• If all Yi can derive ε, then ε ∈ FIRST(X)
FOLLOW Set:
The FOLLOW set of a non-terminal A is the set of terminals that can appear immediately to the right of A in
some sentential form.
Rules:
• Place $ in FOLLOW(S) where S is the start symbol
• If A → αBβ, then everything in FIRST(β) (except ε) is in FOLLOW(B)
• If A → αB or A → αBβ where FIRST(β) contains ε, then everything in FOLLOW(A) is in
FOLLOW(B)
LL(1) Parsing Table:
An LL(1) parser uses a parsing table to decide which production to use, based on the current input symbol
and top of the stack.
• For each production A → α:
o For each terminal a in FIRST(α), add A → α to table[A][a]
o If ε ∈ FIRST(α), then for each b in FOLLOW(A), add A → α to table[A][b]
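As a small worked example, consider the grammar S → AB, A → a | ε, B → b (with ε written as # in the program below):
• FIRST(A) = { a, ε }, FIRST(B) = { b }, and FIRST(S) = { a, b }, since A can derive ε
• FOLLOW(S) = { $ }, FOLLOW(A) = FIRST(B) = { b }, FOLLOW(B) = FOLLOW(S) = { $ }
• Parsing table: table[S][a] = table[S][b] = S → AB, table[A][a] = A → a, table[A][b] = A → ε (because b ∈ FOLLOW(A)), and table[B][b] = B → b
On the input ab, the parser expands S → AB, applies A → a to match a, applies B → b to match b, and accepts on $.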

Algorithm:
FIRST and FOLLOW:
1. Input the grammar and initialize FIRST and FOLLOW sets as empty.
2. Recursively calculate FIRST sets for each non-terminal.
3. After FIRST is complete, compute FOLLOW sets using rules until no more additions occur.
LL(1) Parsing:
1. Use FIRST and FOLLOW sets to construct the parsing table.
2. Simulate parsing with a stack:
o Initialize stack with $ and start symbol.
o At each step, check the top of stack and current input symbol.
o Use the parsing table to apply a production, or report error.

Code:
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>
#include <sstream>
using namespace std;

map<char, vector<string>> grammar;
map<char, set<char>> firstSet, followSet;
map<pair<char, char>, string> parsingTable;
set<char> terminals, nonTerminals;
char startSymbol;

// '#' stands for epsilon; upper-case letters are non-terminals.
bool isTerminal(char c) {
    return !(c >= 'A' && c <= 'Z') || c == '#';
}

void computeFirst(char symbol) {
    if (firstSet.find(symbol) != firstSet.end())
        return;

    set<char> result;
    for (string prod : grammar[symbol]) {
        if (prod == "#") {
            result.insert('#');
            continue;
        }
        for (char ch : prod) {
            if (isTerminal(ch)) {
                result.insert(ch);
                break;
            } else {
                computeFirst(ch);
                result.insert(firstSet[ch].begin(), firstSet[ch].end());
                if (firstSet[ch].find('#') == firstSet[ch].end())
                    break;
                else
                    result.erase('#');
            }
        }
    }
    firstSet[symbol] = result;
}

void computeFollow(char symbol) {
    if (followSet.find(symbol) != followSet.end() && symbol != startSymbol)
        return;
    set<char>& follow = followSet[symbol];

    if (symbol == startSymbol)
        follow.insert('$');

    // Scan every production for occurrences of symbol.
    for (auto& rule : grammar) {
        char lhs = rule.first;
        for (string prod : rule.second) {
            for (int i = 0; i < (int)prod.length(); i++) {
                if (prod[i] == symbol) {
                    bool isLast = (i == (int)prod.length() - 1);
                    if (!isLast) {
                        char next = prod[i + 1];
                        if (isTerminal(next)) {
                            follow.insert(next);
                        } else {
                            follow.insert(firstSet[next].begin(), firstSet[next].end());
                            follow.erase('#');
                            if (firstSet[next].find('#') != firstSet[next].end()) {
                                computeFollow(lhs);
                                follow.insert(followSet[lhs].begin(), followSet[lhs].end());
                            }
                        }
                    } else {
                        if (lhs != symbol) {
                            computeFollow(lhs);
                            follow.insert(followSet[lhs].begin(), followSet[lhs].end());
                        }
                    }
                }
            }
        }
    }
}

void createParsingTable() {
    for (auto& rule : grammar) {
        char lhs = rule.first;
        for (string prod : rule.second) {
            set<char> firstProd;
            if (prod == "#") {
                firstProd = followSet[lhs];
            } else {
                bool addFollow = true;
                for (char ch : prod) {
                    if (isTerminal(ch)) {
                        firstProd.insert(ch);
                        addFollow = false;
                        break;
                    } else {
                        firstProd.insert(firstSet[ch].begin(), firstSet[ch].end());
                        if (firstSet[ch].find('#') == firstSet[ch].end()) {
                            addFollow = false;
                            break;
                        } else {
                            firstProd.erase('#');
                        }
                    }
                }
                if (addFollow) {
                    firstProd.insert(followSet[lhs].begin(), followSet[lhs].end());
                }
            }
            for (char t : firstProd) {
                parsingTable[{lhs, t}] = prod;
            }
        }
    }
}

void parseInput(string input) {
    input += '$';
    vector<char> stack = {'$', startSymbol};
    int i = 0;
    cout << "\nParsing Steps:\n";
    cout << "Stack\t\tInput\t\tAction\n";
    while (!stack.empty()) {
        string stackStr(stack.begin(), stack.end());
        cout << stackStr << "\t\t" << input.substr(i) << "\t\t";
        char top = stack.back();
        char curr = input[i];

        if (top == curr && top == '$') {
            cout << "Accepted\n";
            return;
        }
        if (top == curr) {
            cout << "Match " << top << "\n";
            stack.pop_back();
            i++;
        } else if (isTerminal(top)) {
            cout << "Error\n";
            return;
        } else {
            string prod = parsingTable[{top, curr}];
            if (prod == "") {
                cout << "Error\n";
                return;
            }
            cout << top << " → " << prod << "\n";
            stack.pop_back();
            if (prod != "#") {
                for (int j = prod.size() - 1; j >= 0; j--)
                    stack.push_back(prod[j]);
            }
        }
    }
}

int main() {
    int n;
    cout << "Enter number of productions: ";
    cin >> n;
    cout << "Enter grammar (Format: A->xyz | A-># for epsilon):\n";
    for (int i = 0; i < n; i++) {
        string line;
        cin >> line;
        char lhs = line[0];
        if (i == 0) startSymbol = lhs;

        size_t pos = line.find("->");
        string rhs = line.substr(pos + 2);
        stringstream ss(rhs);
        string token;
        while (getline(ss, token, '|')) {
            grammar[lhs].push_back(token);
            for (char c : token) {
                if (isTerminal(c) && c != '#') terminals.insert(c);
                else if (!isTerminal(c)) nonTerminals.insert(c);
            }
        }
        nonTerminals.insert(lhs);
    }

    for (char nt : nonTerminals)
        computeFirst(nt);
    for (char nt : nonTerminals)
        computeFollow(nt);

    cout << "\nFIRST Sets:\n";
    for (auto& p : firstSet) {
        cout << "FIRST(" << p.first << ") = { ";
        for (char c : p.second) cout << c << " ";
        cout << "}\n";
    }
    cout << "\nFOLLOW Sets:\n";
    for (auto& p : followSet) {
        cout << "FOLLOW(" << p.first << ") = { ";
        for (char c : p.second) cout << c << " ";
        cout << "}\n";
    }
    createParsingTable();
    cout << "\nLL(1) Parsing Table:\n";
    for (auto& p : parsingTable) {
        cout << p.first.first << ", " << p.first.second << " => " << p.second << "\n";
    }
    string input;
    cout << "\nEnter string to parse (e.g., idid): ";
    cin >> input;
    parseInput(input);
    return 0;
}

Output :

Conclusion :
This practical demonstrates how FIRST and FOLLOW sets guide the construction of LL(1) parsing tables. LL(1) parsers are simple and efficient for grammars that satisfy the LL(1) conditions (no ambiguity, no left recursion).
Experiment No. 6

Aim : To implement a Bottom-Up Parser.


Introduction:
In syntax analysis, parsing techniques are classified into top-down and bottom-up parsers. One of the
commonly used bottom-up parsing techniques is the Operator Precedence Parser. This parser is
particularly useful for arithmetic expressions and relies on precedence and associativity of operators to parse
expressions without backtracking.
Objective:
To implement a Bottom-Up Operator Precedence Parser for parsing arithmetic expressions using
precedence relations among operators.

Theory:
• Bottom-Up Parsing is also called shift-reduce parsing. It attempts to construct a parse tree for an
input string starting from the leaves (input) and working up to the root (start symbol).
• In Operator Precedence Parsing, parsing decisions are made by comparing precedence relations
between terminals using a precedence table.
• The parser uses three precedence relations:
o E1 < E2 (Shift): Push E2 to stack.
o E1 > E2 (Reduce): Apply production to reduce.
o E1 = E2 (Match): Used mainly for parentheses.
• Grammar used:
E → E + E | E * E | (E) | id
• We use a precedence table that defines the relations between +, *, id, (, ), and $.
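For example, on the input id+id*id$ (rewritten internally to i+i*i$), these relations drive the following shift-reduce trace in the program below:

Stack      Input     Action
$          i+i*i$    $ < i, shift
$i         +i*i$     i > +, reduce E → id
$E         +i*i$     $ < +, shift
$E+        i*i$      + < i, shift
$E+i       *i$       i > *, reduce E → id
$E+E       *i$       + < *, shift
$E+E*      i$        * < i, shift
$E+E*i     $         i > $, reduce E → id
$E+E*E     $         * > $, reduce E → E * E
$E+E       $         + > $, reduce E → E + E
$E         $         Accept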

Algorithm:
1. Input the expression ending with $.
2. Replace all id with i (for simplification).
3. Initialize a stack with $.
4. Loop through the input:
o Compare the top of stack with current input symbol using precedence table.
o If relation is < or =, perform Shift.
o If relation is >, perform Reduce.
o If input is fully consumed and stack has only start symbol, Accept.
5. Report errors for invalid sequences.
Code:
#include <iostream>
#include <map>
#include <stack>
#include <string>
using namespace std;

// Operator precedence table: precedence[a][b] is the relation between the
// topmost stack terminal a and the lookahead b.
map<char, map<char, char>> precedence = {
    {'+', {{'+', '>'}, {'*', '<'}, {'i', '<'}, {'(', '<'}, {')', '>'}, {'$', '>'}}},
    {'*', {{'+', '>'}, {'*', '>'}, {'i', '<'}, {'(', '<'}, {')', '>'}, {'$', '>'}}},
    {'i', {{'+', '>'}, {'*', '>'}, {')', '>'}, {'$', '>'}}},
    {'(', {{'+', '<'}, {'*', '<'}, {'i', '<'}, {'(', '<'}, {')', '='}}},
    {')', {{'+', '>'}, {'*', '>'}, {')', '>'}, {'$', '>'}}},
    {'$', {{'+', '<'}, {'*', '<'}, {'i', '<'}, {'(', '<'}}}};

bool is_operator(char c) {
    return c == '+' || c == '*' || c == '(' || c == ')' || c == 'i' || c == '$';
}

// Topmost terminal on the stack (skipping over the non-terminal E).
char top_terminal(stack<char>& st) {
    stack<char> temp = st;
    while (!temp.empty()) {
        if (is_operator(temp.top()))
            return temp.top();
        temp.pop();
    }
    return '$';
}

int main() {
    string input;
    cout << "Enter the input expression ending with $ (use 'id' for identifiers): ";
    cin >> input;
    for (size_t pos = 0; (pos = input.find("id", pos)) != string::npos; ) {
        input.replace(pos, 2, "i");
    }
    stack<char> st;
    st.push('$');
    int i = 0;
    char a = input[i];
    cout << "\nStack\tInput\tAction\n";
    while (true) {
        // Display stack and remaining input
        stack<char> temp = st;
        string stack_content, input_content = input.substr(i);
        while (!temp.empty()) {
            stack_content = temp.top() + stack_content;
            temp.pop();
        }
        cout << stack_content << "\t" << input_content << "\t";

        char top = top_terminal(st);
        char action = precedence[top][a];

        if (action == '<' || action == '=') {
            cout << "Shift\n";
            st.push(a);
            i++;
            a = input[i];
        } else if (action == '>') {
            cout << "Reduce\n";
            char popped = st.top(); st.pop();
            if (popped == 'i') {
                st.push('E');                       // E -> id
            } else if (popped == ')') {
                // E -> ( E )
                if (!st.empty() && st.top() == 'E') {
                    st.pop();
                    if (!st.empty() && st.top() == '(') {
                        st.pop();
                        st.push('E');
                    } else {
                        cout << "Error in reduction (missing '(')\n";
                        return 1;
                    }
                } else {
                    cout << "Error in reduction (missing E)\n";
                    return 1;
                }
            } else if (popped == 'E') {
                // E -> E + E | E * E
                if (!st.empty()) {
                    st.pop();                       // the operator
                    if (!st.empty() && st.top() == 'E') {
                        st.pop();
                        st.push('E');
                    } else {
                        cout << "Error in reduction\n";
                        return 1;
                    }
                }
            }
        } else if (a == '$' && st.size() == 2 && st.top() == 'E') {
            cout << "Accept\n";
            break;
        } else {
            cout << "Error\n";
            break;
        }
    }
    return 0;
}

Output :

Conclusion : The Operator Precedence Parser successfully parses arithmetic expressions using shift-reduce
operations guided by a precedence table. It is efficient for expressions with operators and handles ambiguity
using defined precedences.
Experiment No. 7

Aim : To implement an LR Parser.


Introduction:
An LR parser is a bottom-up parser for context-free grammars. It reads input from left to right and produces
a rightmost derivation in reverse. LR parsers are powerful and can handle a large class of grammars. They
are widely used in compiler design due to their ability to detect syntax errors as soon as possible.
In this experiment, we implement a simple LR parser using hardcoded ACTION and GOTO tables for a
simple grammar:
S → A B
A → a
B → b
Algorithm:
1. Initialize the parsing stack with state 0.
2. Append $ to the end of the input string to denote the end of input.
3. Use the top of the stack and the current input symbol to consult the ACTION table:
o If the action is Shift, push the next state to the stack.
o If the action is Reduce, pop states and symbols from the stack as per the production and push
the LHS non-terminal.
o If the action is Accept, report success.
o If no action is found, report error.
4. Continue until the input is accepted or an error is encountered.
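For example, with the tables hardcoded below, the input ab (with $ appended) is processed as: state 0 shifts a and moves to state 2; state 2 shifts b and moves to state 4; state 4 reduces by B → b, and GOTO from state 2 on B leads to state 1; state 1 accepts on $.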
Code:
#include <iostream>
#include <map>
#include <stack>
#include <string>
using namespace std;

// ACTION and GOTO tables, hardcoded for the grammar S -> AB, A -> a, B -> b
map<int, map<char, string>> ACTION;
map<int, map<int, int>> GOTO;

void buildParsingTable() {
    ACTION[0]['a'] = "S2";
    ACTION[2]['b'] = "S4";
    ACTION[4]['$'] = "Reduce B -> b";
    ACTION[1]['$'] = "Accept";
    ACTION[3]['$'] = "Reduce A -> a";

    // GOTO is indexed by (state, non-terminal), with A -> 0 and B -> 1,
    // matching the lhs - 'A' lookup in parseInput().
    GOTO[0][0] = 3; // state 0, non-terminal A
    GOTO[2][1] = 1; // state 2, non-terminal B
}

void parseInput(string input) {
    stack<int> stateStack;
    stack<char> symbolStack;
    stateStack.push(0);
    input += '$';
    int i = 0;
    while (true) {
        int state = stateStack.top();
        char curr = input[i];

        if (ACTION[state].find(curr) == ACTION[state].end()) {
            cout << "Error: No action for state " << state << " and symbol " << curr << endl;
            break;
        }
        string action = ACTION[state][curr];
        cout << "State: " << state << ", Input: " << input.substr(i) << ", Action: " << action << endl;

        if (action[0] == 'S') {
            int next = action[1] - '0';
            stateStack.push(next);
            symbolStack.push(curr);
            i++;
        } else if (action.substr(0, 6) == "Reduce") {
            string prod = action.substr(7);   // e.g. "B -> b"
            char lhs = prod[0];
            int popCount = 1;                 // every RHS in this grammar has length 1

            for (int j = 0; j < popCount; j++) {
                stateStack.pop();
                symbolStack.pop();
            }
            int topState = stateStack.top();
            int gotoState = GOTO[topState][lhs - 'A'];
            stateStack.push(gotoState);
            symbolStack.push(lhs);

            cout << "Reducing by: " << prod << endl;
        } else if (action == "Accept") {
            cout << "Input accepted!" << endl;
            break;
        } else {
            cout << "Parsing Error!" << endl;
            break;
        }
    }
}

int main() {
    buildParsingTable();
    string input;
    cout << "Enter input string to parse (use 'a' and 'b' as terminals): ";
    cin >> input;
    parseInput(input);
    return 0;
}

Output :

Conclusion : In this experiment, we successfully implemented an LR parser in C++ for a simplified grammar. The parser
correctly processes input strings by shifting and reducing according to ACTION and GOTO tables. LR parsing is efficient and
suitable for real-world programming language syntax analysis.
Experiment No. 8

Aim : To implement Syntax-Directed Translation.


Introduction:
Syntax Directed Translation (SDT) uses grammar rules augmented with semantic actions. These actions can compute values,
generate intermediate code, or perform checks. SDTs are essential in compilers for converting high-level language constructs into
low-level representations.

For this experiment, we’ll evaluate arithmetic expressions (like 3+5*2) using syntax-directed definitions during parsing. The
translation is done alongside parsing using stack-based evaluation.

We use a simplified grammar:

E → E + T
E → T
T → T * F
T → F
F → ( E )
F → num

Algorithm:
1. Read the input expression.
2. Convert the expression into postfix (or parse using stack with translation rules).
3. Use a stack to simulate semantic actions (e.g., evaluating operations).
4. When reducing by a rule, apply the semantic action:
o For example, E → E + T: compute E.val = E1.val + T.val.
5. Continue parsing until the entire expression is evaluated.
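For example, for 3+5*2 the evaluator pushes 3 on the value stack, pushes +, pushes 5, pushes * without reducing (its precedence is higher than that of +), and pushes 2; at the end of the input it reduces T → T * F giving 5 * 2 = 10, then E → E + T giving 3 + 10 = 13, so the printed result is 13.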

Code:
#include <iostream>
#include <stack>
#include <string>
#include <stdexcept>
#include <cctype>
using namespace std;

int precedence(char op) {
    if(op == '+' || op == '-') return 1;
    if(op == '*' || op == '/') return 2;
    return 0;
}

int applyOp(int a, int b, char op) {
    switch(op) {
        case '+': return a + b;
        case '-': return a - b;
        case '*': return a * b;
        case '/': return b ? a / b : throw runtime_error("Division by zero");
    }
    return 0;
}

// Evaluate expression using SDT logic: the value stack holds the synthesized
// attribute (E.val), and each reduction applies its semantic action via applyOp.
int evaluate(string expr) {
    stack<int> values;
    stack<char> ops;

    for(size_t i = 0; i < expr.length(); i++) {
        if(expr[i] == ' ') continue;

        // If number, push to value stack
        if(isdigit(expr[i])) {
            int val = 0;
            while(i < expr.length() && isdigit(expr[i])) {
                val = (val * 10) + (expr[i] - '0');
                i++;
            }
            values.push(val);
            i--;
        }
        // If '(', push to ops stack
        else if(expr[i] == '(') {
            ops.push(expr[i]);
        }
        // If ')', solve until '('
        else if(expr[i] == ')') {
            while(!ops.empty() && ops.top() != '(') {
                int val2 = values.top(); values.pop();
                int val1 = values.top(); values.pop();
                char op = ops.top(); ops.pop();
                values.push(applyOp(val1, val2, op));
            }
            ops.pop(); // pop '('
        }
        // Otherwise an operator: reduce while the stack holds equal or higher precedence
        else {
            while(!ops.empty() && precedence(ops.top()) >= precedence(expr[i])) {
                int val2 = values.top(); values.pop();
                int val1 = values.top(); values.pop();
                char op = ops.top(); ops.pop();
                values.push(applyOp(val1, val2, op));
            }
            ops.push(expr[i]);
        }
    }
    // Apply the remaining semantic actions
    while(!ops.empty()) {
        int val2 = values.top(); values.pop();
        int val1 = values.top(); values.pop();
        char op = ops.top(); ops.pop();
        values.push(applyOp(val1, val2, op));
    }
    return values.top();
}

int main() {
    string expression;
    cout << "Enter arithmetic expression: ";
    getline(cin, expression);

    try {
        int result = evaluate(expression);
        cout << "Result: " << result << endl;
    } catch(const exception& e) {
        cerr << "Error: " << e.what() << endl;
    }
    return 0;
}

Output :

Conclusion:
In this experiment, we successfully implemented syntax-directed translation for arithmetic expressions. The program evaluates the
expression by applying semantic rules during parsing. This technique is essential in compilers for generating intermediate code
and performing computations during syntax analysis.
