
CHAPTER 1

True/False Questions
(Mark as True (T) or False (F) and correct the false statements.)

1. T/F: The von Neumann architecture is the basis for imperative languages.
o Answer: T
2. T/F: Pure interpretation is faster than compilation because it skips
translation.
o Answer: F (Pure interpretation is slower; compilation
translates to machine code for faster execution.)
3. T/F: Readability and writability are the same criteria for language
evaluation.
o Answer: F (Readability focuses on ease of understanding;
writability focuses on ease of writing programs.)
4. T/F: Java uses a hybrid implementation method (e.g., bytecode +
JIT).
o Answer: T
5. T/F: COBOL is a functional programming language.
o Answer: F (COBOL is imperative, used for business
applications.)

6. T/F: The fetch-execute cycle is a fundamental process in von Neumann architecture.
o Answer: T
7. T/F: Scripting languages like Python are typically compiled for faster
execution.
o Answer: F (They are usually interpreted or hybrid, not purely
compiled.)
8. T/F: Orthogonality in language design means a small set of
constructs can be combined in many ways.
o Answer: T
9. T/F: Prolog is an example of a functional programming language.
o Answer: F (Prolog is a logic programming language.)
10. T/F: The cost of a programming language includes training
programmers and maintaining programs.
o Answer: T

11. T/F: Fortran was designed specifically for business applications.
 Answer: F (Fortran was designed for scientific computing; COBOL
was for business applications.)
12. T/F: JavaScript is a purely compiled language.
 Answer: F (JavaScript is typically interpreted or JIT-compiled, not
purely compiled.)
13. T/F: Orthogonality reduces the number of language constructs
needed.
 Answer: T
14. T/F: The cost of a language includes only the time taken to
write programs.
 Answer: F (It also includes training, compiling, executing, and
maintaining programs.)
15. T/F: Logic programming languages like Prolog use rules
specified in a fixed order.
 Answer: F (Rules in Prolog can be specified in any order.)

Fill-in-the-Blank Questions

1. The __________ bottleneck refers to the speed limitation caused by the connection between memory and CPU in von Neumann architecture.
o Answer: von Neumann
2. __________ languages manipulate symbols rather than numbers,
making them suitable for AI (e.g., LISP).
o Answer: Functional
3. The __________ phase of compilation converts characters into lexical
units.
o Answer: lexical analysis
4. __________ is a language evaluation criterion that measures how well
a program conforms to its specifications.
o Answer: Reliability
5. An example of a markup/programming hybrid language is __________.
o Answer: XSLT (or JSTL)

6. The __________ architecture stores both instructions and data in the same memory.
o Answer: von Neumann
7. __________ is a language evaluation criterion that measures the ease
of moving programs between systems.
o Answer: Portability
8. In __________ implementation, programs are translated to an
intermediate language and then interpreted.
o Answer: hybrid
9. The __________ phase of compilation transforms lexical units into
parse trees.
o Answer: syntax analysis
10. __________ is a trade-off where powerful operators (e.g., in APL)
may reduce readability.
o Answer: Writability vs. readability
11. The __________ programming paradigm focuses on applying
functions to parameters (e.g., Haskell).
 Answer: functional
12. In language evaluation, __________ refers to how well a
language supports ignoring details when working with complex
structures.
 Answer: abstraction
13. The __________ phase of compilation checks for semantic errors
(e.g., type mismatches).
 Answer: semantic analysis
14. The __________ trade-off occurs when a language's flexibility
(e.g., C++ pointers) reduces reliability.
 Answer: writability vs. reliability
15. A __________ is a program that processes macros before
compilation (e.g., #include in C).
 Answer: preprocessor

Short Answer/Essay Questions

1. Compare compilation and pure interpretation, highlighting one advantage and disadvantage of each.
o Answer:
 Compilation:
 Advantage: Faster execution (translated to
machine code).
 Disadvantage: Slower translation phase; less
portable.
 Pure Interpretation:
 Advantage: Easier debugging (immediate error
feedback).
 Disadvantage: Slower execution (10–100x slower
than compiled code).
2. Explain two influences on programming language design,
providing examples.
o Answer:
 Computer Architecture: Von Neumann architecture
led to imperative languages (e.g., C) with
variables/assignment mimicking memory/CPU
operations.
 Software Methodologies: Object-oriented
programming (e.g., Java) emerged from data abstraction
and encapsulation needs.
3. Why is operator overloading a readability concern?
o Answer: It can cause confusion if operators behave differently
in various contexts (e.g., + for addition vs. string
concatenation), making code harder to understand.
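A concrete Python illustration of the point (standard Python behavior, not an example from the chapter):

```python
# The same '+' symbol denotes different operations depending on operand types:
a = 2 + 3        # integer addition
b = "2" + "3"    # string concatenation
print(a)  # 5
print(b)  # 23
```

A reader scanning `x + y` cannot tell which operation runs without knowing the operand types — exactly the readability concern described.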
4. Describe the trade-off between reliability and execution
cost using an example from the PDF.
o Answer: Java enforces array index checks for reliability
(preventing errors), but this increases execution cost due to
runtime checks.
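Python behaves like Java here — every index is checked at run time. A minimal sketch of the trade-off:

```python
# Bounds checking: the runtime verifies each index before the access,
# improving reliability (no silent memory corruption) at the cost of
# one extra check per access.
xs = [1, 2, 3]
try:
    xs[10]                      # out of bounds: caught, not undefined behavior
except IndexError as e:
    caught = str(e)
print("caught:", caught)
```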

5. Name three programming domains and their associated languages.
o Answer:
 Scientific: Fortran (floating-point/arrays).
 Systems: C (efficiency).
 Web: JavaScript/HTML (scripting/markup).
6. Explain the role of exception handling in language reliability.
o Answer: Exception handling intercepts runtime errors (e.g.,
division by zero) and allows corrective actions, preventing
crashes and improving program robustness.
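A minimal Python sketch of the idea (safe_divide is an invented helper for illustration):

```python
# Exception handling intercepts the runtime error and substitutes a
# corrective action, so the program keeps running instead of crashing.
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return None        # corrective action instead of a crash

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # None
```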
7. Compare imperative and functional languages, giving one
example of each.
o Answer:
 Imperative: Focuses on variables/assignment (e.g., C).
 Functional: Focuses on function application (e.g.,
Haskell).
8. Why is the von Neumann bottleneck significant in language
design?
o Answer: It limits speed due to memory-CPU data transfer,
encouraging efficient language constructs (e.g., loops, arrays)
to minimize bottlenecks.
9. Describe how top-down design influenced language
evolution.
o Answer: It emphasized structured programming (e.g., Pascal),
leading to better control structures (e.g., if-else, while) for
readability.
10. What is the purpose of a preprocessor in C? Give an
example directive.
o Answer: It processes macros before compilation
(e.g., #include <stdio.h> inserts library code).
11. Explain why array bounds checking improves reliability
but increases execution cost.
 Answer: Bounds checking prevents memory access violations
(improving reliability) but requires extra runtime checks (increasing
cost).
12. Compare static and dynamic type checking, giving an
example language for each.
 Answer:
o Static: Types checked at compile-time (e.g., Java).
o Dynamic: Types checked at runtime (e.g., Python).
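A short Python sketch of the dynamic side — the mismatch surfaces only when the offending line executes:

```python
# Dynamic type checking: no compile-time pass rejects this program;
# the error appears at run time, when '+' receives incompatible operands.
def add(x, y):
    return x + y

print(add(1, 2))           # fine
try:
    add(1, "2")            # detected only now, at run time
except TypeError:
    print("runtime type error")
```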

13. Why did object-oriented languages (e.g., Java) emerge in the 1980s?
 Answer: They combined data abstraction, inheritance, and
polymorphism to manage growing software complexity.
14. Describe one advantage of hybrid implementation (e.g.,
Java bytecode) over pure interpretation.
 Answer: Hybrid systems balance speed (partial compilation) and
portability (intermediate code).
15. How does operator overloading affect writability and
readability?
 Answer: It improves writability (concise code) but may reduce
readability if overused (unclear operator meanings).
Bonus: Mixed-Up Terms
1. Language Categories: Imperative, Functional, Markup, Logic.
o Correction: Markup does not belong (markup/programming hybrids are a separate category, not a core paradigm).
2. Compilation Phases: Lexical analysis, Optimization, Syntax
analysis, Code generation.
o Correction: Move Optimization after intermediate code
generation.

3. Language Evaluation Criteria: Readability, Generality, Writability, Reliability.
o Correction: Generality → Cost (Generality is an additional
criterion, not core).
4. Implementation Methods: Compilation, JIT, Pure interpretation,
Hybrid.
o Correction: JIT is a subset of hybrid/delayed compilation.

5. Programming Domains: Scientific, Business, Systems, Web.
o Correction: All are correct; no fix needed.
6. Language Categories: Imperative, Markup, Functional, Logic.
o Correction: Markup → Markup/Programming Hybrid (e.g.,
XSLT).

Classic Confusion Pairs
(Explain the difference between the following.)
1. Lexical vs. Syntax Analysis:
o Lexical: Converts characters to tokens (e.g., int
x → [KEYWORD:int, ID:x]).
o Syntax: Checks token structure against grammar rules (e.g., if
(x) { } validity).
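A minimal Python lexer sketch of that conversion (TOKEN_SPEC and tokenize are illustrative names, not a real compiler's API):

```python
import re

# Lexical analysis: group characters into lexemes and tag each with a token.
TOKEN_SPEC = [
    ("KEYWORD", r"\bint\b"),
    ("ID",      r"[A-Za-z_]\w*"),
    ("SKIP",    r"\s+"),        # whitespace is discarded, not tokenized
]

def tokenize(src):
    tokens, pos = [], 0
    while pos < len(src):
        for name, pattern in TOKEN_SPEC:
            m = re.match(pattern, src[pos:])
            if m:
                if name != "SKIP":
                    tokens.append((name, m.group()))
                pos += m.end()
                break
        else:
            raise SyntaxError(f"illegal character at position {pos}")
    return tokens

print(tokenize("int x"))  # [('KEYWORD', 'int'), ('ID', 'x')]
```

Syntax analysis would then check the resulting token stream against the grammar; the lexer itself never looks at structure.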
2. Type Checking vs. Exception Handling:
o Type Checking: Catches errors at compile-time (e.g., int x =
"hello";).
o Exception Handling: Catches runtime errors (e.g., file not
found).

3. Compilation vs. JIT Compilation:
o Compilation: Entire program translated to machine code
before execution.
o JIT: Code compiled to machine language during runtime (e.g.,
Java bytecode).
4. Syntax vs. Semantics:
o Syntax: Rules for valid program structure (e.g., if (x) { }).
o Semantics: Meaning of valid constructs (e.g., if executes code
conditionally).

Tricky Terminology
(Define these often-confused terms.)
1. Aliasing: Multiple references to the same memory location (e.g.,
two pointers to one variable).
2. Well-definedness: Completeness/precision of a language’s official
specification.

CHAPTER 3
True/False Questions
1. Syntax refers to the meaning of expressions and statements
in a programming language.
o False (Syntax refers to the form/structure, semantics refers to
meaning)
2. A lexeme is a category of tokens in a language.
o False (A token is a category of lexemes)
3. BNF was developed by Noam Chomsky to describe natural
languages.
o False (BNF was developed by John Backus for Algol58;
Chomsky developed context-free grammars)
4. A grammar is ambiguous if it generates a sentential form
with two or more distinct parse trees.
o True
5. Attribute grammars can only use synthesized attributes, not
inherited attributes.
o False (They can use both types of attributes)

6. A token is the smallest meaningful unit in a programming language's syntax.
o True (Tokens represent categories like keywords, operators, or
identifiers.)
7. A leftmost derivation always expands the rightmost
nonterminal first.
o False (Leftmost derivation expands the leftmost nonterminal
first.)
8. EBNF uses braces {} to indicate optional parts of a grammar
rule.
o False (Braces {} indicate repetition; brackets [] indicate
optional parts.)
9. Static semantics can be fully described using only context-
free grammars (CFGs).
o False (CFGs cannot enforce rules like "variable declaration
before use.")
10. An unambiguous grammar ensures that every valid
program has only one parse tree.
 True
11. In BNF, the start symbol must appear on the left-hand
side of only one production rule.
o False (The start symbol can appear in multiple rules, but it
must be uniquely defined as the starting point.)
12. All context-free languages can be parsed in linear time
using a deterministic parser.
o False (Only certain subsets like LR(1) grammars can be parsed
deterministically in linear time.)
13. Synthesized attributes in attribute grammars are
evaluated in a top-down pass of the parse tree.
o False (Synthesized attributes are computed bottom-up;
inherited attributes are top-down.)
14. EBNF's { } notation is strictly equivalent to recursion in
standard BNF.
o True (e.g., <list> → item { , item } vs. <list> → item | item ,
<list>.)
15. Static semantics include rules like "a variable must be
declared before use," which can be enforced by CFGs.
o False (CFGs cannot enforce context-sensitive rules; attribute
grammars or semantic checks are needed.)

Fill-in-the-Blank Questions
1. A ______ is a string of characters over some alphabet, while a ______
is a set of these.
o sentence, language
2. In BNF, abstractions that represent classes of syntactic structures
are called ______ symbols.
o nonterminal
3. The ______ symbol is a special element of the nonterminals in a
grammar from which all derivations begin.
o start
4. ______ is a method to describe optional parts in Extended BNF by
placing them in square brackets.
o [ ] (brackets)
5. ______ semantics are aspects of syntax that cannot be described by
context-free grammars, such as variable declaration before use.
o Static
6. In parsing, a ______ is a sequence of rule applications that starts with
the start symbol and ends with a sentence.
o derivation
7. A ______ is a graphical representation of a derivation, showing the
hierarchical structure of the grammar rules.
o parse tree
8. The notation <expr> → <term> { (+ | -) <term> } in EBNF indicates
that an expression can have one or more ______.
o additive terms (repetition of + or - operations)
9. In attribute grammars, a ______ attribute is computed from child
nodes up to the parent, while a ______ attribute is passed down from
parent to children.
o synthesized, inherited
10. The rule <if_stmt> → if <logic_expr> then <stmt> in BNF
describes the syntax of a(n) ______ statement.
 conditional (if-then)
11. A ______ derivation expands the rightmost nonterminal first,
while a ______ derivation expands the leftmost nonterminal first.
o rightmost, leftmost
12. In EBNF, the notation (a | b) is equivalent to BNF's ______.
o alternative productions (e.g., <X> → a | b)
13. The ______ problem occurs when a grammar allows multiple
parse trees for the same sentence.
o ambiguity
14. An attribute grammar's ______ are boolean expressions that
must evaluate to true for a valid program.
o predicates
15. The rule <stmt> → if <expr> then <stmt> [else <stmt>] uses
EBNF's ______ notation for optional parts.
o square brackets [ ]
Definition/Explanation Questions
1. What is the difference between syntax and semantics in
programming languages?
o Syntax refers to the form or structure of expressions,
statements, and program units (how code is written).
Semantics refers to the meaning of these elements (what the
code does). Together they define a language.
2. Explain the purpose of attribute grammars and how they
extend context-free grammars.
o Attribute grammars add semantic information to parse tree
nodes in CFGs. They include attributes (values), functions to
compute attributes, and predicates to check consistency. They
are used for static semantics specification and compiler
design.
3. Why might a grammar be ambiguous? Provide an example.
o A grammar is ambiguous if it can generate a sentence with
multiple parse trees, often due to unclear operator
precedence/associativity.
Example: <expr> → <expr> - <expr> | const allows "const -
const - const" to be parsed in two ways.
4. Compare BNF and EBNF, highlighting key differences.
o BNF uses basic rules with "→" and "|" for alternatives. EBNF adds:
 [ ] for optional elements
 { } for repetition (0 or more)
 ( ) with | for grouped alternatives
EBNF is more concise and readable for complex grammars.
5. What problem do attribute grammars solve that context-free
grammars cannot handle?
o CFGs cannot handle context-sensitive requirements like type
checking or "declare before use". Attribute grammars add
attributes (e.g., variable types) and predicates to enforce
these rules during parsing.

6. What is the difference between a lexeme and a token? Give an example.
o A lexeme is the smallest textual unit (e.g., "count", "+", "42"),
while a token is its category
(e.g., identifier, plus_op, int_literal). Example: In x = 5;, "x" is
a lexeme, and identifier is its token.

7. Explain why operator precedence is important in grammars, and how it can be enforced.
o Operator precedence ensures expressions like a + b * c are
parsed correctly (as a + (b * c)). It can be enforced by
structuring grammar rules hierarchically (e.g., expr → term +
term, term → factor * factor).
8. What is a sentential form? How does it differ from a
sentence?
o A sentential form is any string of symbols
(terminals/nonterminals) in a derivation. A sentence is a
sentential form with only terminals (e.g., a = b + 1 is a
sentence; <var> = <expr> is a sentential form).
9. How does an attribute grammar help with type checking?
o It assigns types as attributes (e.g., expr.actual_type) and uses
predicates to enforce rules (e.g., expr.actual_type ==
expected_type), catching mismatches like int + string.
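A hedged sketch of that predicate in Python (check_assign and its parameter names are invented for illustration; a real evaluator walks the parse tree):

```python
# Attribute-grammar predicate for <assign> → <var> = <expr>:
# expected_type is inherited from <var>; actual_type is synthesized from <expr>.
def check_assign(var_type, expr_type):
    expected_type = var_type
    actual_type = expr_type
    if actual_type != expected_type:          # the predicate must hold
        raise TypeError(f"cannot assign {actual_type} to {expected_type}")
    return True

print(check_assign("int", "int"))  # True
```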
10. Why is recursion used in BNF rules for lists
(e.g., <ident_list>)?
 Recursion allows arbitrary-length lists by repeatedly expanding a
nonterminal (e.g., <ident_list> → ident | ident, <ident_list> for lists
like a, b, c).
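The recursion maps directly onto a recursive-descent routine; a Python sketch (parse_ident_list is an invented name, with tokens assumed pre-lexed):

```python
# <ident_list> → ident | ident , <ident_list>
# Each recursive call consumes one more "ident ," pair, so lists of any
# length are handled by the same two-alternative rule.
def parse_ident_list(tokens, pos=0):
    if pos >= len(tokens) or not tokens[pos].isidentifier():
        raise SyntaxError("expected identifier")
    pos += 1
    if pos < len(tokens) and tokens[pos] == ",":
        return parse_ident_list(tokens, pos + 1)  # right recursion mirrors the rule
    return pos                                     # position just past the list

print(parse_ident_list(["a", ",", "b", ",", "c"]))  # 5
```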

11. Explain the difference between top-down and bottom-up parsing. Give an example of each.
o Top-down (e.g., recursive descent): Starts from the start
symbol and expands nonterminals (e.g., predicting if <expr>
then <stmt>).
Bottom-up (e.g., LR parsing): Starts with terminals and
reduces to nonterminals (e.g., reducing id =
expr ; to <assign>).
12. Why is the grammar <expr> → <expr> - <expr> |
const ambiguous? How would you disambiguate it?
o Ambiguity: const - const - const can parse as (const - const) -
const or const - (const - const).
Fix: Use precedence rules (e.g., <expr> → <expr> - <term> |
<term>).
13. What is the purpose of inherited attributes in attribute
grammars? Provide an example.
o Inherited attributes pass context downward
(e.g., <expr>.expected_type ensures type consistency
in <assign> → <var> = <expr>).

14. How does EBNF improve readability over BNF? Illustrate with an example.
o EBNF replaces recursion with { } for repetition and [ ] for
optional parts.
Example:
BNF: <param_list> → param | param , <param_list>
EBNF: <param_list> → param { , param }.
15. Describe a scenario where static semantics cannot be
checked during parsing and requires a separate phase.
o Example: "Variables must be declared before use" requires a
symbol table, which is built after parsing.

Example-Based Questions
1. Given the BNF rule: <ident_list> → ident | ident, <ident_list>
Show a derivation for the list "x, y, z".
o <ident_list> => ident, <ident_list>
=> ident, ident, <ident_list>
=> ident, ident, ident
=> x, y, z
2. For the ambiguous grammar: <expr> → <expr> - <expr> |
const
Draw two different parse trees for "const - const - const".
o Tree 1: Left-associative
<expr> → <expr> - <expr> → (<expr> - <expr>) - const
Tree 2: Right-associative
<expr> → <expr> - <expr> → const - (<expr> - <expr>)
3. Given Python code: a = 5; b = 6
Identify lexemes and tokens (like the example on PDF page 5).
o Lexemes | Tokens
a | identifier
= | assignment_op
5 | int_literal
; | semicolon
b | identifier
= | assignment_op
6 | int_literal

4. Convert this BNF to EBNF:
<term> → <term> * <factor> | <term> / <factor> | <factor>
o <term> → <factor> { (* | /) <factor> }
5. An attribute grammar has:
Syntax rule: <var> → id
Semantic rule: <var>.actual_type ← lookup(id)
Explain what happens when processing "int x = y;" if y was
not declared.
o The lookup function would fail to find y's type, triggering a
static semantic error ("undeclared variable") during
compilation, even though the syntax is correct.
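A minimal Python sketch of the lookup step (symbol_table and lookup are illustrative names, not a specific compiler's API):

```python
# The symbol table records declared names; looking up an undeclared name
# is a static-semantic error even though the syntax parsed cleanly.
symbol_table = {"x": "int"}        # built from the declaration "int x"

def lookup(name):
    if name not in symbol_table:
        raise NameError(f"undeclared variable: {name}")
    return symbol_table[name]

print(lookup("x"))                 # int
try:
    lookup("y")                    # y was never declared
except NameError as e:
    print(e)                       # undeclared variable: y
```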

6. Given the grammar:
<expr> → <expr> + <term> | <term>
<term> → <term> * <factor> | <factor>
<factor> → (<expr>) | number
Show a leftmost derivation for 2 * (3 + 4).
o <expr> => <term> => <term> * <factor> => <factor> * <factor>
=> number * <factor> => number * (<expr>)
=> number * (<expr> + <term>) => number * (<term> + <term>)
=> number * (<factor> + <term>) => number * (number + <term>)
=> number * (number + <factor>) => number * (number + number),
where the number lexemes are 2, 3, and 4, giving 2 * (3 + 4)
7. Convert this BNF to EBNF:
<stmt_list> → <stmt> | <stmt> ; <stmt_list>
o <stmt_list> → <stmt> { ; <stmt> }
8. Identify the lexemes and tokens in if (x > 0) then y = 1;:
o Lexemes | Tokens
if | if_keyword
( | left_paren
x | identifier
> | greater_than_op
0 | int_literal
) | right_paren
then | then_keyword
y | identifier
= | assignment_op
1 | int_literal
; | semicolon
9. Draw the parse tree for A = B + C * D using this
unambiguous grammar:
<assign> → <id> = <expr>
<expr> → <expr> + <term> | <term>
<term> → <term> * <factor> | <factor>
<factor> → <id>
<id> → A | B | C | D
o Tree structure:
<assign>
├─ <id> (A)
├─ =
└─ <expr>
├─ <expr> → <term> → <factor> → <id> (B)
├─ +
└─ <term>
├─ <term> → <factor> → <id> (C)
├─ *
└─ <factor> → <id> (D)
10. An attribute grammar has:
Syntax rule: <assign> → <var> = <expr>
Semantic rule: <expr>.expected_type ← <var>.actual_type
What happens if var is float and expr evaluates to int?
o The predicate <expr>.actual_type ==
<expr>.expected_type fails, triggering a type mismatch error
(e.g., "cannot assign int to float").

11. Given the ambiguous grammar:
<S> → <A>
<A> → <A> + <A> | id
Show two parse trees for id + id + id.
o Tree 1 (left-associative):
<S> → <A> → <A> + <A> → (<A> + <A>) + <A> → (id + id)
+ id
Tree 2 (right-associative):
<S> → <A> → <A> + <A> → id + (<A> + <A>) → id + (id +
id)

12. Convert this recursive BNF rule to EBNF:
<numbers> → number | number , <numbers>
o <numbers> → number { , number }
13. An attribute grammar has:
Syntax rule: <decl> → int <id>
Semantic rule: <id>.type ← int
Predicate: not exists_in_symbol_table(<id>.string)
What happens for int x; int x;?
o The predicate fails on the second declaration, triggering a
"duplicate identifier" error.
14. Given the grammar:
<stmt> → if <expr> then <stmt> else <stmt> | assign
Explain the "dangling else" problem and how to resolve it.
o Problem: if E1 then if E2 then S1 else S2 could pair else with
either if.
Resolution: Use a grammar that forces else to match the
nearest if (e.g., distinguish between matched and
unmatched if statements).
15. Draw the parse tree for a = b * c + d using:
<assign> → <id> = <expr>
<expr> → <expr> + <term> | <term>
<term> → <term> * <factor> | <factor>
<factor> → <id>
<id> → a | b | c | d
o Tree:
<assign>
├─ <id> (a)
├─ =
└─ <expr>
├─ <expr> → <term> → <term> * <factor> → <factor> *
<factor> → b * c
├─ +
└─ <term> → <factor> → d

Bonus: Common Exam Pitfalls
 Misconception: "BNF and EBNF are interchangeable."
o Reality: EBNF is more expressive (e.g., { } for repetition).
 Trap Question: "Can a lexer detect undeclared variables?"
o Answer: No—lexers only identify tokens; semantic analysis
checks declarations.

CHAPTER 4

True/False Questions
1. Lexical analysis is responsible for constructing the parse
tree of a program.
o Answer: False. (Lexical analysis identifies lexemes and
converts them into tokens; syntax analysis constructs the
parse tree.)
2. BNF descriptions are used primarily for lexical analysis.
o Answer: False. (BNF describes syntax, not lexical patterns.)
3. Recursive-descent parsing is a top-down parsing method.
o Answer: True.
4. Bottom-up parsers use leftmost derivations to build parse
trees.
o Answer: False. (Bottom-up parsers use reverse rightmost
derivations.)
5. LR parsers are less powerful than LL parsers in terms of
grammar coverage.
o Answer: False. (LR parsers handle a superset of grammars
compared to LL parsers.)
6. A lexical analyzer uses context-free grammars to identify
tokens.
o Answer: False. (Lexical analyzers use regular grammars;
context-free grammars are for syntax analysis.)
7. EBNF is preferred over BNF for recursive-descent parsers
because it reduces nonterminals.
o Answer: True. (EBNF’s concise notation simplifies parser
code.)
8. Shift-reduce parsers always shift the next token onto the
stack before reducing.
o Answer: False. (They shift until a reducible handle is found.)
9. The lexical analyzer detects semantic errors in the source
code.
o Answer: False. (It only detects lexical/syntax errors; semantic
analysis is later.)
10. LR parsers require backtracking to resolve grammar
conflicts.
o Answer: False. (LR parsers are deterministic and avoid
backtracking.)

11. All programming language grammars can be parsed using LL(1) parsers.
o Answer: False. (LL(1) parsers only work for a subset of
unambiguous grammars.)
12. The lexical analyzer removes comments from the source
code during tokenization.
o Answer: True. (Comments are typically discarded during
lexical analysis.)
13. In bottom-up parsing, the handle is always the longest
possible substring that matches a production rule.
o Answer: False. (It's the substring that matches a rule and
allows further reduction.)
14. LALR parsers have more states than SLR parsers for the same grammar.
o Answer: False. (LALR merges LR(1) states, ending up with the same number of states as SLR — far fewer than canonical LR(1).)
15. The lexical analyzer assigns semantic meaning to
tokens.
o Answer: False. (It only categorizes tokens; semantic analysis
happens later.)

Fill-in-the-Blank Questions

1. The two main parts of syntax analysis are the ________ (low-level)
and the ________ (high-level).
o Answer: lexical analyzer, syntax analyzer/parser
2. A ________ is a substring of the source program that matches a
pattern for a token.
o Answer: lexeme

3. In recursive-descent parsing, each ________ in the grammar has a corresponding subprogram.
o Answer: nonterminal
4. The ________ parsing method reduces the input string to the start
symbol by replacing handles.
o Answer: bottom-up/shift-reduce
5. A key advantage of separating lexical and syntax analysis is ________
(e.g., simplicity, efficiency, or portability).
o Answer: efficiency (or simplicity/portability)
6. The ________ algorithm is used in LR parsers to determine when to
shift or reduce.
o Answer: lookahead
7. A ________ is a finite automaton with a stack, used to implement
parsers.
o Answer: push-down automaton
8. In the grammar <expr> → <term> {(+ | -) <term>}, the curly
braces denote ________.
o Answer: repetition (0 or more occurrences)
9. The ________ token signals the end of input to the parser.
o Answer: EOF (End-of-File)
10. A state diagram for lexical analysis combines equivalent
transitions using ________ (e.g., "letter" or "digit" classes).
o Answer: character classes

11. The ________ phase follows syntax analysis in a compiler and checks for type compatibility.
o Answer: semantic analysis
12. In the expression a + b * c, the ________ rule ensures * has
higher precedence than +.
o Answer: grammar/production (e.g., <term> → <factor> {(*
| /) <factor>})
13. A ________ grammar has no ambiguity in production rules for
any input string.
o Answer: unambiguous

14. The ________ table in an LR parser determines whether to shift, reduce, accept, or error.
o Answer: parsing action
15. The function ________ in a lexical analyzer adds characters to
the current lexeme.
o Answer: addChar
Short Answer/Classic Questions
1. Compare top-down and bottom-up parsing methods.
o Answer:
 Top-down: Begins with the start symbol, uses leftmost
derivations, and builds the parse tree from the root (e.g.,
recursive-descent, LL parsers).
 Bottom-up: Begins with input tokens, uses reverse
rightmost derivations, and builds the parse tree from the
leaves (e.g., LR parsers).
2. Why is BNF suitable for describing syntax?
o Answer: BNF is clear, concise, and enables direct parser
implementation. It simplifies maintenance and supports
unambiguous grammar rules.
3. Explain the role of the lexical analyzer with an example.
o Answer: The lexical analyzer scans source code to group
characters into lexemes (e.g., int value = 5; →
tokens: int (keyword), value (identifier), = (operator), 5 (consta
nt), ; (symbol)).
4. What is a "handle" in bottom-up parsing?
o Answer: A handle is the substring of a right-sentential form
that matches the RHS of a production rule, which can be
reduced to its LHS to progress toward the start symbol.
5. Why are LR parsers preferred over LL parsers?
o Answer: LR parsers handle more grammars, detect errors
earlier, and are equally efficient, making them more versatile
for programming languages.

6. Why is lexical analysis separated from syntax analysis?
o Answer:
 Simplicity: Regular grammars (lex) are simpler than
CFGs (syntax).
 Efficiency: Optimize lexing (e.g., buffering) separately.
 Portability: Lexical rules (e.g., whitespace) may vary
across platforms.
7. Describe how a recursive-descent parser handles the
rule <factor> → id | (<expr>).
o Answer:
 Checks nextToken for ID or INT_CODE → consumes token
if matched.
 If nextToken is LP_CODE, consumes (, calls expr(), then
checks for ).
 Else, throws an error.
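Those steps can be sketched as a small Python class (the Parser class and its token spellings are assumptions for illustration; Sebesta's C version uses integer token codes such as ID and LP_CODE):

```python
# Recursive-descent subprogram for:  <factor> → id | ( <expr> )
class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def expr(self):                 # simplified: an expression is just a factor
        self.factor()

    def factor(self):
        tok = self.peek()
        if tok == "id":
            self.pos += 1           # consume the identifier
        elif tok == "(":
            self.pos += 1           # consume '('
            self.expr()             # parse the parenthesized expression
            if self.peek() != ")":
                raise SyntaxError("expected )")
            self.pos += 1           # consume ')'
        else:
            raise SyntaxError("expected id or (")

p = Parser(["(", "id", ")"])
p.factor()
print("parsed", p.pos, "tokens")   # parsed 3 tokens
```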
8. What is the "dangling else" problem, and how is it resolved
in parsing?
o Answer:
 Ambiguity in nested if-else statements (which else pairs
with which if?).
 Resolution: Grammar rules enforce "else" to match the
nearest unmatched "if."
9. Compare SLR and LALR parsers.
o Answer:
 SLR: Simpler, uses LR(0) states + FOLLOW sets; may
have conflicts.
 LALR: More precise, uses LR(1) lookaheads; merges
states to save space.

10. Trace the shift-reduce steps for parsing a * (b + c) (Assume grammar: S → S+S | S*S | (S) | a | b | c).
o Answer:
 Stack: $, Input: a*(b+c)$ → Shift a
 Stack: $a, Input: *(b+c)$ → Reduce S → a
 Stack: $S, Input: *(b+c)$ → Shift *, (, b
 Stack: $S*(b, Input: +c)$ → Reduce S → b
 Stack: $S*(S, Input: +c)$ → Shift +, c
 Stack: $S*(S+c, Input: )$ → Reduce S → c, then S → S+S
 Stack: $S*(S, Input: )$ → Shift )
 Stack: $S*(S), Input: $ → Reduce S → (S), then S → S*S
 Stack: $S, Input: $ → Accept.

11. Explain why left recursion is problematic for top-down parsers and how to eliminate it.
o Answer:
 Problem: Causes infinite recursion (e.g., A → Aα).
 Solution: Rewrite as right recursion (e.g., A → αA', A' →
αA' | ε).
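In code, the rewrite turns the would-be infinite recursion into a loop; a Python sketch using the EBNF form <expr> → term { + term } (names invented for illustration):

```python
# A left-recursive expr() would call itself before consuming any input,
# recursing forever. The rewrite consumes a term first, then iterates.
def parse_expr(tokens, pos=0):
    if pos >= len(tokens) or tokens[pos] != "term":
        raise SyntaxError("expected term")
    pos += 1
    while pos < len(tokens) and tokens[pos] == "+":  # { + term } repetition
        pos += 1
        if pos >= len(tokens) or tokens[pos] != "term":
            raise SyntaxError("expected term")
        pos += 1
    return pos

print(parse_expr(["term", "+", "term"]))  # 3
```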
12. Describe the "shift-reduce conflict" in bottom-up
parsing with an example.
o Answer:
 Conflict: Parser cannot decide whether to shift or
reduce (e.g., in if if else with grammar S → if E S | if E S
else S).
 Resolution: Prefer shifting (to match else to nearest if).
13. What is the purpose of the lookahead token in
predictive parsing?
o Answer: Determines which production rule to apply without
backtracking (e.g., in LL(1), 1-token lookahead suffices).
14. Compare recursive-descent and table-driven LL parsers.
o Answer:
 Recursive-descent: Manual implementation, direct
code for productions.
 Table-driven: Uses parse table and stack; more generic
but less readable.
15. Given the grammar:
S → aSb | ε
Show a parse tree for aabb.
o Answer:
S
├─ a
├─ S
│  ├─ a
│  ├─ S
│  │  └─ ε
│  └─ b
└─ b

Example-Based Question
1. Given the expression (sum + 47) / total, trace the lexical
analyzer’s output.
o Answer:
 Next token: 25 ((), Lexeme: (
 Next token: 11 (sum), Lexeme: sum
 Next token: 21 (+), Lexeme: +
 Next token: 10 (47), Lexeme: 47
 Next token: 26 ()), Lexeme: )
 Next token: 24 (/), Lexeme: /
 Next token: 11 (total), Lexeme: total
 Next token: -1 (EOF), Lexeme: EOF

2. Convert the BNF rule <term> → <term> * <factor> | <term> / <factor> | <factor> to EBNF.
o Answer: <term> → <factor> { (* | /) <factor> }
3. Convert the BNF rule <if_stmt> → if (<expr>) <stmt> | if (<expr>) <stmt> else <stmt> to EBNF.
o Answer: <if_stmt> → if (<expr>) <stmt> [else <stmt>]

Error Handling Questions
1. How does a lexical analyzer handle invalid tokens (e.g., 123abc)?
o Answer: Skips/reports the error and continues scanning from
the next valid character.
2. What recovery strategy does a recursive-descent parser use after a syntax error?
o Answer: Panic-mode recovery (skip tokens until a
synchronizing token is found).

Mixed-Up Terms (Identify the Odd One Out)
1. Lexeme, Token, Parse Tree, Regular Expression
o Answer: Parse Tree (others relate to lexical analysis).
2. LL Parser, LR Parser, Recursive-Descent, Shift-Reduce
o Answer: Recursive-Descent (the others name general parsing
methods; recursive-descent is one concrete implementation
technique for LL parsing).
3. Handle, Lookahead, Lexeme, Reduce
o Answer: Lexeme (others are bottom-up parsing concepts).
4. Finite Automaton, Push-Down Automaton, Regular
Expression, Parse Tree
o Answer: Parse Tree (others are lexical/syntax analysis tools).
5. Lexical analyzer, Syntax analyzer, Semantic analyzer, Code
generator
o Answer: Code generator (the others are analysis phases of the
compiler front end; code generation is a synthesis phase).
6. BNF, EBNF, Finite automaton, Parse tree
o Answer: Finite automaton (others describe grammar or
syntax structures).

CHAPTER 5
True/False Questions
1. T/F: In C99, variable names longer than 63 characters are truncated
by the compiler.
o Answer: F (Only the first 63 characters are significant; they
are not truncated.)
2. T/F: Dynamic type binding occurs at compile time in languages like
Python and JavaScript.
o Answer: F (It occurs at runtime.)
3. T/F: Aliases in programming languages are always harmful to
readability.
o Answer: T (They make it harder to track which variables
reference the same memory location.)
4. T/F: Static scoping resolves variable references based on the calling
sequence of subprograms.
o Answer: F (Static scoping is based on the program text’s
structure, not calling sequences.)
5. T/F: In C#, const named constants are bound at runtime.
o Answer: F (const is bound at compile time; readonly is bound
at runtime.)
6. T/F: In Ada, variable names are case-sensitive.
o Answer: F (Ada is not case-sensitive.)
7. T/F: A variable’s scope and lifetime are always the same.
o Answer: F (Example: A static variable in C has a lifetime for
the entire program but scope limited to its function.)
8. T/F: In dynamic type binding, type errors are always caught at
compile time.
o Answer: F (Type errors are detected at runtime.)
9. T/F: C++ allows blocks to create nested scopes, but Java does not.
o Answer: T (Both languages have blocks, but Java prohibits
declaring a variable in a nested block with the same name as one
in an enclosing method scope, to reduce errors.)
10. T/F: Fortran uses implicit declarations for all variables.
o Answer: F (Fortran allows both explicit and implicit
declarations.)
11. T/F: In C#, the var keyword enables dynamic type binding.
o Answer: F (var uses static type inference; the type is
determined at compile time.)
12. T/F: All variables in Python have global scope by default.
o Answer: F (Variables in functions are local unless
declared global.)
13. T/F: In static scoping, the scope of a variable is determined by
its position in the source code.
o Answer: T (Static/lexical scoping depends on the program's
textual structure.)
14. T/F: COBOL's 300 reserved words make it more flexible for
programmers.
o Answer: F (Too many reserved words increase collision risks
and reduce flexibility.)
15. T/F: A variable's address can change during program execution
in some languages.
o Answer: T (e.g., Variables in stack-dynamic languages like C
can have different addresses across calls.)

Fill-in-the-Blank Questions
1. The __________ of a variable is the time during which it is bound to a
specific memory cell.
o Answer: Lifetime
2. In Ruby, variables prefixed with @@ are __________ variables.
o Answer: class
3. A __________ word (like if in Java) cannot be redefined by the
programmer, unlike a __________ word (like Real in Fortran).
o Answer: reserved, keyword
4. The __________ of a variable is its address, while the __________ is its
value.
o Answer: l-value, r-value
5. In dynamic scoping, a variable reference is resolved by searching the
__________.
o Answer: chain of subprogram calls
6. The __________ of a variable is the range of statements where it can
be referenced.
o Answer: Scope
7. In Perl, variable names begin with __________ (e.g., $x).
o Answer: special characters (like $, @, %)
8. A __________ binding is fixed before runtime (e.g., variable types in
Java).
o Answer: static
9. In PHP, variables declared outside functions have __________
scope but are not visible inside functions unless declared global.
o Answer: global
10. __________ constants (like C++ const) are bound at compile
time and cannot change.
o Answer: Manifest
11. In __________, variable names starting with @ denote instance
variables (e.g., @name).
o Answer: Ruby
12. The __________ of a variable is the set of operations allowed on
its values, determined by its __________.
o Answer: operations, type
13. A __________ variable (e.g., static in C) retains its value
between function calls but is only accessible within its __________.
o Answer: static, scope/function
14. In __________ languages (e.g., JavaScript), type checking occurs
at runtime, which is called __________ typing.
o Answer: dynamically typed, dynamic
15. The term __________ refers to multiple names accessing the
same memory location, created via pointers or references.
o Answer: aliasing
Short Answer/Essay Questions
1. Compare static and dynamic type binding, including
advantages/disadvantages of each.
o Answer:
 Static binding: Occurs at compile time (e.g., explicit
declarations in Java). Advantages: Efficiency, early error
detection. Disadvantages: Less flexibility.
 Dynamic binding: Occurs at runtime (e.g., Python
assignments). Advantages: Flexibility (generic code).
Disadvantages: Runtime overhead, harder debugging.
2. Explain the issue of variable hiding in nested static scopes, with
an example.
o Answer:
 When a variable in an inner scope has the same name as
one in an outer scope, the outer variable is "hidden."
Example:
void foo() {
int x = 5;
{ int x = 10; } // Inner x hides outer x
}
3. Why are aliases problematic for program readability? Provide two
ways aliases can be created.
o Answer:
 Aliases force programmers to track multiple names for
the same memory location. Creation methods:
1. Pointers (e.g., int *p = &x;).
2. Reference variables (e.g., int &r = x; in C++).
4. Describe the difference between scope and lifetime, using a C
static variable as an example.
o Answer:
 Scope: Determines where a variable is visible (e.g., a
static variable in a C function is local to that function).
 Lifetime: Determines how long the variable exists (e.g.,
a static variable persists across function calls).
5. What is the purpose of named constants? Contrast
C#’s const and readonly bindings.
o Answer:
 Named constants improve readability/modifiability. In
C#:
 const: Bound at compile time (e.g., const int X =
5;).
 readonly: Bound at runtime (e.g., readonly int Y =
DateTime.Now.Year;).
6. Explain why dynamic scoping is harder to debug than static
scoping.
o Answer:
Dynamic scoping depends on runtime call chains, making it
unpredictable. For example, a variable x in a function could
refer to different declarations depending on who called it.
Static scoping, being text-based, allows compile-time checks.
7. Compare explicit and implicit variable declarations, giving
language examples.
o Answer:
 Explicit: Types declared by the programmer (e.g., int
x in Java). Ensures reliability.
 Implicit: Types inferred by language (e.g., x = 10 in
Python). Improves writability but risks errors (e.g.,
mistyped names in Fortran create new variables).
8. How do C and C++ handle global variables differently from
Java?
o Answer:
 C/C++: Globals are declared outside
functions; extern shares them across files.
 Java: No true globals; uses public static fields in classes
for similar functionality.
9. Why might a language designer choose static type
binding over dynamic?
o Answer:
Static binding catches errors early (compile-time), improves
performance (no runtime checks), and enhances readability
(clear types).

10. Describe a scenario where aliases are useful despite their
readability issues.
o Answer:
Aliases can optimize memory (e.g., two pointers accessing
large data without copying). Example: C’s union lets variables
share memory for different types.
11. Why might a language designer prefer implicit
declarations? Provide an example where this could lead to
bugs.
o Answer:
Implicit declarations improve writability (e.g., no need to
declare x in Python). However, typos can create unintended
variables (e.g., in Fortran, misspelling TOTAL as TOTLA silently
creates a new implicitly declared variable instead of raising an
error).
12. Explain how C's extern keyword relates to variable
scope and binding.
o Answer:
extern declares a variable without defining it, allowing cross-
file access. The variable must be defined once (e.g., int x;) and
can be declared elsewhere (extern int x;). Binding occurs at
link time.
13. Contrast stack-dynamic and heap-dynamic variables in
terms of lifetime and allocation.
o Answer:
 Stack-dynamic: Allocated/deallocated automatically
(e.g., local variables in C). Lifetime matches scope.
 Heap-dynamic: Explicitly allocated (e.g., malloc in C).
Lifetime persists until deallocated (free).
14. How does Python's global keyword alter variable
scoping rules?
o Answer:
Without global, assignments in functions create local variables.
With global, assignments modify the global variable
(e.g., global x inside a function changes x defined outside).
15. Discuss one advantage and one disadvantage of
dynamic type binding.
o Answer:
 Advantage: Flexibility (e.g., a list can hold mixed types
in Python).
 Disadvantage: Runtime errors (e.g., "5" + 3 raises a
TypeError at runtime in Python, whereas a statically typed
language would reject a comparable type mismatch at compile
time).

Bonus: Mixed-Up Terms
(Identify the incorrect term and correct it.)
1. "In PHP, global variables are accessed using
the LOCALS array."
o Correction: $GLOBALS (not LOCALS).
2. "Dynamic scoping is based on the lexical structure of the
program."
o Correction: "Static scoping" (dynamic scoping depends on
calling sequences).
(Identify and fix the mistake.)
3. "In dynamic scoping, a variable’s declaration is found by
checking enclosing blocks in the code."
o Correction: Replace "enclosing blocks" with "call stack"
(dynamic scoping uses runtime calls, not lexical structure).
4. "PHP variables declared outside functions are automatically
global in all functions."
o Correction: They skip nested functions unless
declared global or accessed via $GLOBALS.

Classic Confusion Pairs
(Distinguish between terms.)
1. Keyword vs. Reserved Word:
o Keyword: Context-dependent (e.g., Real in Fortran can be a
type or variable).
o Reserved Word: Always off-limits (e.g., if in Java).
2. L-value vs. R-value:
o L-value: Address (e.g., x in x = 5).
o R-value: Content (e.g., 5 in x = 5).

Terminology Drill
(Match the term to its definition.)
1. Static Binding
o Definition: Association occurs before runtime (e.g., variable
types in Java).
2. Dynamic Scope
o Definition: Variable resolution depends on the call stack (e.g.,
older Lisp variants).
3. Named Constant
o Definition: A variable bound to a fixed value (e.g., const
double PI = 3.14159;).
