
2nd Unit QN

The document contains a series of questions and answers related to lexical analysis, finite automata, and regular expressions. It covers topics such as the advantages of lexical-analyser generators, the purpose of error recovery, and the differences between deterministic and nondeterministic finite automata. Additionally, it discusses methods for converting regular expressions to finite automata and the goal of DFA minimization.

Uploaded by

sethuramanr1976
© All Rights Reserved

1. What is the advantage of using a lexical-analyser generator instead of implementing a lexical analyser manually?
a) It eliminates the need for regular expressions
b) It allows modifying the analyser by changing only the affected patterns
c) It makes programming in assembly language easier
d) It removes the need for token identification

2. Which notation is used to specify lexeme patterns in a lexical-analyser generator?

a) Context-free grammar
b) Syntax trees
c) Regular expressions
d) Machine code

3. What is the final step in transforming lexeme patterns into a functional lexical analyser?
a) Converting regular expressions into assembly language
b) Transforming nondeterministic automata into deterministic automata
c) Manually coding each lexeme pattern in C++
d) Using a lexical-analyser generator to produce machine code

4. Which of the following best describes a lexeme?

a) A sequence of characters that matches a pattern for a token
b) A set of rules that define the grammar of a language
c) The final compiled output of a lexical analyzer
d) A high-level representation of an abstract syntax tree

5. What is the purpose of error recovery in lexical analysis?

a) To generate optimized machine code
b) To correct syntax errors before parsing
c) To handle situations where no token pattern matches the input
d) To improve the execution speed of a program

6. What is the key difference between lexical analysis and parsing?

a) Lexical analysis focuses on identifying syntax errors, while parsing only removes
comments.
b) Lexical analysis converts source code into machine code, while parsing groups characters
into lexemes.
c) Lexical analysis identifies lexemes and generates tokens, while parsing checks the syntax
of token sequences.
d) Lexical analysis directly executes the program, while parsing generates an abstract syntax
tree.

7. Why is a two-buffer scheme used in lexical analysis?

a) To store multiple source programs in memory
b) To efficiently handle large lookaheads while scanning tokens
c) To eliminate the need for a parser in compilation
d) To execute source code directly without translation

8. What is the purpose of using sentinels in buffer pairs?

a) To mark the beginning of each token in the source program
b) To avoid extra checks for buffer-end conditions while scanning input
c) To improve the accuracy of regular expression matching
d) To prevent overwriting of lexemes in memory
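The sentinel idea behind question 8 can be sketched in a few lines of Python. This is an illustrative sketch, not the classic C buffer code: a sentinel character appended to the buffer lets the inner scanning loop stop without a separate end-of-buffer test.

```python
# Sketch of sentinel-based scanning: the sentinel character (assumed
# not to occur in ordinary source text) ends the loop by itself, so
# the inner loop needs no explicit "i < len(buffer)" check.
SENTINEL = "\0"

def scan_word(buffer: str, start: int) -> str:
    """Collect a run of letters starting at `start`."""
    buf = buffer + SENTINEL   # one sentinel at the buffer's end
    i = start
    lexeme = []
    while buf[i].isalpha():   # the sentinel is not alphabetic,
        lexeme.append(buf[i]) # so it terminates the loop naturally
        i += 1
    return "".join(lexeme)

print(scan_word("count+1", 0))  # -> count
```

In the real two-buffer scheme the same trick is applied at the end of each half-buffer, so only a sentinel hit triggers the (rare) reload logic.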

9. In the two-buffer scheme, how does the "forward" pointer function?

a) It marks the beginning of the lexeme being processed
b) It scans ahead to find the next lexeme and determines token boundaries
c) It moves backward to check for operator precedence
d) It resets to the start of the file after every lexeme is processed

10. Consider the string "banana" and three of its parts: "ba", "nan", and "na".

Which of the following statements is true regarding prefixes, suffixes, and substrings?

a) "nan" is a prefix of "banana."
b) "ba" is a suffix of "banana."
c) "nana" is a substring of "banana."
d) "banana" is not a proper prefix of itself.

11. Which of the following is NOT a valid operation for strings?

a) Concatenation
b) Union
c) Exponentiation
d) Subtraction

12. Which of the following describes the union of two languages L and M?

a) L ∪ M = {s | s is in L or s is in M}
b) L ∪ M = {s | s is in L and s is in M}
c) L ∪ M = {s | s is in L and s is not in M}
d) L ∪ M = {s | s is not in L and s is in M}
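Since a (finite) language is just a set of strings, the operations behind question 12 map directly onto Python set operations. The tiny languages below are invented for illustration:

```python
L = {"a", "ab"}
M = {"b", "ab"}

union = L | M                            # {s | s in L or s in M}
concat = {x + y for x in L for y in M}   # every L-string followed by an M-string

print(sorted(union))   # -> ['a', 'ab', 'b']
print(sorted(concat))  # -> ['aab', 'ab', 'abab', 'abb']
```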

13. What does the Kleene closure operation on a language L (denoted L*) represent?

a) The set of strings formed by concatenating L zero or more times
b) The set of strings formed by concatenating L one or more times
c) The set of strings formed by repeating each element in L
d) The set of strings that includes only the empty string
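Kleene closure L* is infinite in general, but its definition as the union of L⁰, L¹, L², … can be illustrated by computing it up to a bound. This is a sketch, not library code:

```python
def kleene_up_to(L, n):
    """Strings of L* obtainable from at most n concatenations of L."""
    result = {""}   # L^0 contains only the empty string (zero copies)
    power = {""}
    for _ in range(n):
        # L^(i+1) = L^i concatenated with L
        power = {x + y for x in power for y in L}
        result |= power
    return result

print(sorted(kleene_up_to({"a", "b"}, 2)))
# -> ['', 'a', 'aa', 'ab', 'b', 'ba', 'bb']
```

Note that the empty string is always present: concatenating L zero times is allowed, which is exactly what distinguishes L* from the positive closure L⁺.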

14. What is the purpose of the transition diagram in lexical analysis?

a) To convert regular expressions into patterns for tokens
b) To simulate the parsing of the grammar
c) To represent the lexical analyzer's logic in visual form
d) To assign variable names to each token

15. How are keywords such as if, then, and else treated differently from identifiers in the
lexical analyser?
a) They are treated the same as identifiers and stored in the symbol table.
b) They are explicitly recognized using separate transition diagrams.
c) They are ignored during lexical analysis.
d) They are treated as comments.

16. What is the primary function of a syntax analyser in a compiler?

a) It checks the correctness of variable names.
b) It breaks the source code into tokens.
c) It generates machine code from the source code.
d) It checks the grammatical structure of the source code.

17. What does the Lex compiler do with the patterns defined in a Lex program?

a) It directly converts the patterns into machine code.
b) It transforms the regular expressions into a transition diagram and generates corresponding C code.
c) It assigns attribute values to each token based on the input.
d) It compiles the input patterns into an executable binary.

18. What does the Lex program’s action part typically contain?

a) Regular expressions only.
b) Functions in C, which are executed when a pattern matches the input.
c) Integer codes for each token.
d) A sequence of input characters that need to be processed.

19. What is the purpose of the "%%" separator in a Lex program?

a) It separates regular definitions from the pattern-action rules.
b) It separates the declaration section from the auxiliary function section.
c) It marks the beginning and end of a Lex program.
d) It denotes the start of the main program in the Lex language.
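The pattern-action idea from questions 17-19 can be mimicked in Python with the `re` module. This is only a sketch of the concept, not real Lex: the token names and patterns below are invented, and unlike Lex (which picks the longest match), Python's alternation picks the first alternative that matches at a position.

```python
import re

# Hypothetical pattern-action pairs, in the spirit of a Lex rules section.
rules = [
    ("IF",  r"if"),                     # keyword recognized by its own pattern
    ("ID",  r"[A-Za-z][A-Za-z0-9]*"),   # identifiers
    ("NUM", r"[0-9]+"),                 # integer literals
    ("WS",  r"[ \t]+"),                 # whitespace
]
# Combine all patterns into one regex with one named group per rule.
master = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in rules))

def tokenize(text):
    for m in master.finditer(text):
        if m.lastgroup != "WS":   # like a Lex action that discards blanks
            yield (m.lastgroup, m.group())

print(list(tokenize("if x1 42")))
# -> [('IF', 'if'), ('ID', 'x1'), ('NUM', '42')]
```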

20. Which of the following statements is true about nondeterministic finite automata
(NFA)?
a) An NFA accepts a string only if there is a single path labeled by that string leading to an
accepting state.
b) An NFA can have multiple edges with the same symbol leaving a state.
c) An NFA can only recognize languages that a deterministic finite automaton (DFA) cannot
recognize.
d) In an NFA, the empty string (ε) cannot be a transition label.

21. What is the primary difference between a deterministic finite automaton (DFA) and a
nondeterministic finite automaton (NFA)?

a) A DFA can have multiple edges for a symbol leaving a state, whereas an NFA can have only
one edge.
b) A DFA requires that each state have exactly one edge for each symbol, while an NFA may
have multiple edges for the same symbol or even no edge.
c) An NFA is used in lexical analyzers, while DFAs are used for recognition purposes only.
d) A DFA does not have a transition function, but an NFA does.

22. In the context of finite automata, what does the transition function define?

a) It determines the input string that the automaton will accept.
b) It defines the path the automaton follows based on the current state and input symbol.
c) It defines the accepting states of the automaton.
d) It provides the algorithm for recognizing a string.

23. In the subset construction for converting an NFA to a DFA, what does each state of the
constructed DFA represent?

a) A single NFA state
b) A set of NFA states
c) A transition between NFA states
d) A unique input symbol

24. In the context of the NFA-to-DFA conversion, what is the purpose of the closure operation ε-closure(s)?

a) It finds the set of NFA states reachable from state s on a given input symbol
b) It finds the set of NFA states reachable from state s by following epsilon (ε) transitions
alone
c) It computes the set of all states reachable from any state of the NFA
d) It marks all accepting states in the NFA
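The two ideas in questions 23 and 24 fit together: ε-closure collects the NFA states reachable by ε-transitions alone, and the subset construction then uses those sets as the states of the DFA. A small Python sketch, using an invented four-state NFA (states 0-3, accepting state 3):

```python
from collections import deque

EPS = ""  # epsilon transitions are labelled with the empty string here
# Hypothetical NFA: {state: {symbol: set of target states}}
nfa = {
    0: {EPS: {1}, "a": {0}},
    1: {"b": {2}},
    2: {EPS: {3}},
    3: {},
}

def eps_closure(states):
    """States reachable from `states` by epsilon transitions alone (Q24)."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in nfa[s].get(EPS, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return frozenset(seen)

def subset_construction(start):
    """Build a DFA whose states are sets of NFA states (Q23)."""
    start_set = eps_closure({start})
    dfa, work = {}, deque([start_set])
    while work:
        S = work.popleft()
        if S in dfa:
            continue
        dfa[S] = {}
        for sym in {y for s in S for y in nfa[s] if y != EPS}:
            T = eps_closure({t for s in S for t in nfa[s].get(sym, ())})
            dfa[S][sym] = T
            work.append(T)
    return dfa

print(sorted(eps_closure({0})))  # -> [0, 1]
```

Running `subset_construction(0)` on this NFA gives a DFA whose start state is the set {0, 1} and whose "b"-successor is the set {2, 3}, illustrating that each DFA state is a set of NFA states.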

25. Which of the following is a key step in simulating an NFA using a DFA?

a) Making transitions based on the input string
b) Directly processing the input string without using any states
c) Ignoring epsilon transitions
d) Simulating only one NFA state at a time

26. Which state is considered important in an NFA?

a) A state with no outgoing transitions.
b) A state that leads to a dead end.
c) A state with at least one non-ε outgoing transition.
d) A state that is accepting.

27. Which state in an NFA is not important?

a) Start state.
b) Any state with a transition on #.
c) The accepting state with no outgoing transitions.
d) A state that has outgoing transitions.

28. What is the purpose of the followpos(p) function?

a) To find the positions that can follow a specific position p.
b) To check if a node is nullable.
c) To find positions that are unreachable.
d) To track the start position of a string.

29. Which of the following is a method used to convert a regular expression to a finite
automaton?

a) Subset construction
b) Pumping Lemma
c) Thompson’s construction
d) Context-free grammar parsing

30. What is the goal of DFA minimization?

a) To reduce the number of states and transitions in the DFA
b) To convert a DFA into an NFA
c) To simplify the regular expression
d) To add more states for error handling
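DFA minimization (question 30) works by partition refinement: start from the accepting/non-accepting split and keep splitting groups whose members transition to different groups. A sketch under an invented four-state DFA, in which states 0 and 2 behave identically and are merged:

```python
# Hypothetical DFA: {state: {symbol: target state}}; accepting = {3}.
dfa = {
    0: {"a": 1, "b": 2},
    1: {"a": 1, "b": 3},
    2: {"a": 1, "b": 2},   # same behaviour as state 0
    3: {"a": 1, "b": 3},
}
accepting = {3}

def minimize(dfa, accepting):
    states = set(dfa)
    symbols = {sym for trans in dfa.values() for sym in trans}
    # Initial partition: accepting vs non-accepting states.
    parts = [p for p in (accepting & states, states - accepting) if p]
    changed = True
    while changed:
        changed = False
        new_parts = []
        for p in parts:
            # Group states in p by which block each symbol leads to.
            def signature(s):
                return tuple(
                    next(i for i, q in enumerate(parts) if dfa[s][sym] in q)
                    for sym in sorted(symbols)
                )
            buckets = {}
            for s in p:
                buckets.setdefault(signature(s), set()).add(s)
            new_parts.extend(buckets.values())
            if len(buckets) > 1:
                changed = True
        parts = new_parts
    return parts

print(sorted(sorted(p) for p in minimize(dfa, accepting)))
# -> [[0, 2], [1], [3]]
```

Each final group becomes one state of the minimal DFA, which is exactly the state-count reduction option a) describes.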
