What is the primary function of a lexical analyzer?
a) Parsing syntax errors
b) Generating machine code
c) Tokenizing the input source code
d) Performing semantic analysis
Answer: c) Tokenizing the input source code
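To make the answer concrete, here is a minimal Python sketch of tokenization using regular expressions; the token names and patterns are illustrative, not taken from any particular compiler:

```python
# Minimal tokenizer sketch: split source text into (type, lexeme) pairs
# using a combined regular expression. Token names and patterns are
# illustrative only.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),              # integer literals
    ("IDENT",  r"[A-Za-z_]\w*"),     # identifiers
    ("OP",     r"[+\-*/=]"),         # single-character operators
    ("SKIP",   r"\s+"),              # whitespace: matched but not emitted
    ("ERROR",  r"."),                # anything else is a lexical error
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    tokens = []
    for m in MASTER.finditer(text):
        kind = m.lastgroup
        if kind == "SKIP":
            continue                      # whitespace produces no token
        if kind == "ERROR":
            raise SyntaxError(f"unexpected character {m.group()!r}")
        tokens.append((kind, m.group()))  # the matched text is the lexeme
    return tokens
```

For example, `tokenize("x = 42 + y")` yields the token stream `[("IDENT", "x"), ("OP", "="), ("NUMBER", "42"), ("OP", "+"), ("IDENT", "y")]`.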
Which of the following is not a function of a lexical analyzer?
a) Tokenization
b) Syntax checking
c) Removing comments
d) Removing whitespace
Answer: b) Syntax checking
What does a lexical analyzer output as a result of its processing?
a) Abstract syntax tree
b) Token stream
c) Machine code
d) Symbol table
Answer: b) Token stream
Which compiler component is built by converting regular expressions into finite state machines?
a) Syntax analyzer
b) Lexical analyzer
c) Code generator
d) Optimizer
Answer: b) Lexical analyzer
In the context of lexical analysis, what is a “scanner”?
a) A tool for generating syntax trees
b) A component that converts source code into tokens
c) A program that generates machine code
d) A tool for optimizing code
Answer: b) A component that converts source code into tokens
Which data structure is commonly used by lexical analyzers to store tokens?
a) Stack
b) Queue
c) Array
d) List
Answer: c) Array
What role does the “lexeme” play in lexical analysis?
a) It is the process of generating tokens
b) It is a sequence of characters that forms a token
c) It is a type of machine code
d) It is used for optimizing code
Answer: b) It is a sequence of characters that forms a token
Which of the following tools is used to create lexical analyzers from regular expressions?
a) Yacc
b) Bison
c) Flex
d) Gcc
Answer: c) Flex
What does a lexical analyzer do with comments and whitespace in the source code?
a) Translates them into tokens
b) Ignores and removes them
c) Converts them into intermediate code
d) Generates syntax errors
Answer: b) Ignores and removes them
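A short sketch of how comments can be stripped before tokenization; the `//` line-comment syntax handled here is illustrative:

```python
# Sketch of comment removal: delete // line comments and drop any lines
# left blank, so the tokenizer never sees them. Comment syntax is
# illustrative.
import re

def strip_comments(source):
    # remove everything from // to the end of each line
    no_comments = re.sub(r"//[^\n]*", "", source)
    # drop lines that became empty after comment removal
    return "\n".join(line for line in no_comments.splitlines() if line.strip())
```

In practice most lexers skip comments on the fly while scanning rather than in a separate pass, but the effect is the same: comments produce no tokens.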
What is the primary purpose of a finite state machine (FSM) in lexical analysis?
a) To perform syntax checking
b) To generate intermediate code
c) To recognize patterns and sequences in the source code
d) To optimize code
Answer: c) To recognize patterns and sequences in the source code
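A hand-built finite state machine makes the answer tangible. This DFA accepts identifiers matching `[A-Za-z_][A-Za-z0-9_]*`; the state names and transition table are illustrative:

```python
# Table-driven DFA recognizing identifiers, sketching how a regular
# expression becomes a finite state machine. States and transitions are
# illustrative.
def classify(ch):
    if ch.isalpha() or ch == "_":
        return "letter"
    if ch.isdigit():
        return "digit"
    return "other"

# transition[state][character class] -> next state (None = reject)
TRANSITIONS = {
    "start":    {"letter": "in_ident", "digit": None,       "other": None},
    "in_ident": {"letter": "in_ident", "digit": "in_ident", "other": None},
}
ACCEPTING = {"in_ident"}

def is_identifier(text):
    state = "start"
    for ch in text:
        state = TRANSITIONS[state][classify(ch)]
        if state is None:
            return False
    return state in ACCEPTING
```

Lexer generators such as Flex build exactly this kind of transition table automatically from the regular expressions in the lexical specification.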
What is the role of a symbol table in lexical analysis?
a) To store the token patterns
b) To manage identifiers and their attributes
c) To generate syntax trees
d) To convert tokens to machine code
Answer: b) To manage identifiers and their attributes
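A symbol table can be sketched as a dictionary mapping identifier names to attribute records; the attribute fields (`type`, `line`) below are illustrative:

```python
# Sketch of a symbol table: identifier name -> attribute record.
# Attribute fields are illustrative.
class SymbolTable:
    def __init__(self):
        self._entries = {}

    def insert(self, name, **attrs):
        # First occurrence wins; later occurrences reuse the entry,
        # mirroring how a lexer records each identifier once.
        if name not in self._entries:
            self._entries[name] = dict(attrs)
        return self._entries[name]

    def lookup(self, name):
        return self._entries.get(name)

table = SymbolTable()
table.insert("count", type="int", line=3)
```

Later phases (semantic analysis, code generation) consult and enrich the same table, so in a real compiler it is shared rather than owned by the lexer alone.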
Which type of errors does a lexical analyzer primarily detect?
a) Syntax errors
b) Semantic errors
c) Lexical errors
d) Runtime errors
Answer: c) Lexical errors
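A lexical error is simply a character sequence that cannot start any valid token. A sketch of detecting and locating such characters, with a deliberately tiny, illustrative legal character set:

```python
# Sketch of lexical-error detection: flag any character that cannot
# begin a valid token, with its line and column. The legal character
# set is illustrative.
def find_lexical_errors(source):
    errors = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if not (ch.isalnum() or ch in " _+-*/=()"):
                errors.append((lineno, col, ch))
    return errors
```

Reporting the position alongside the offending character is what lets the compiler print messages like "illegal character '@' at line 2, column 7".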
What is a “token” in the context of lexical analysis?
a) A sequence of machine code instructions
b) A basic unit of syntax such as keywords or operators
c) A type of syntax error
d) An intermediate code representation
Answer: b) A basic unit of syntax such as keywords or operators
Which phase directly follows lexical analysis in the compilation process?
a) Semantic analysis
b) Syntax analysis
c) Code generation
d) Optimization
Answer: b) Syntax analysis
What is a “regular expression” used for in lexical analysis?
a) To define patterns for tokens
b) To generate machine code
c) To create syntax trees
d) To optimize intermediate code
Answer: a) To define patterns for tokens
How does a lexical analyzer handle numeric literals?
a) Converts them into tokens representing their values
b) Ignores them
c) Parses them into syntax trees
d) Generates error messages
Answer: a) Converts them into tokens representing their values
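Concretely, the lexer matches the numeric lexeme and attaches its value as a token attribute. The patterns and token names below are illustrative:

```python
# Sketch: match a numeric lexeme and build a token carrying its value,
# distinguishing integer and floating-point literals. Patterns are
# illustrative.
import re

# float branch first, so "3.14" is not cut short at "3"
NUMBER = re.compile(r"\d+\.\d+|\d+")

def number_token(lexeme):
    if "." in lexeme:
        return ("FLOAT", float(lexeme))
    return ("INT", int(lexeme))
```

Note the ordering in the pattern: trying the floating-point alternative before the integer one ensures the longest match, so `NUMBER.match("3.14 + 2")` captures `"3.14"` rather than `"3"`.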
What does “lexical scope” refer to in the context of lexical analysis?
a) The region of the program text within which an identifier is valid
b) The depth of the syntax tree
c) The extent of memory usage
d) The range of regular expressions used
Answer: a) The region of the program text within which an identifier is valid
Which tool is commonly used for specifying lexical rules in a language?
a) Bison
b) Yacc
c) Lex
d) Gcc
Answer: c) Lex
What is a “lexical unit” in lexical analysis?
a) A type of syntax error
b) A sequence of characters that forms a token
c) A part of the machine code
d) An intermediate code representation
Answer: b) A sequence of characters that forms a token
How does a lexical analyzer differentiate between keywords and identifiers?
a) By using syntax rules
b) By using context-sensitive rules
c) By using regular expressions
d) By generating intermediate code
Answer: c) By using regular expressions
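Regular expressions are the textbook answer; in practice many lexers implement this with a hybrid: match everything that fits the identifier pattern, then consult a reserved-word table to reclassify keywords. A sketch, with an illustrative keyword set:

```python
# Common implementation: lex keywords with the identifier pattern, then
# reclassify via a reserved-word table. The keyword set is illustrative.
KEYWORDS = {"if", "else", "while", "return"}

def classify_word(lexeme):
    # a word is a keyword only if it matches the table exactly
    return ("KEYWORD", lexeme) if lexeme in KEYWORDS else ("IDENT", lexeme)
```

This keeps the automaton small: one identifier pattern plus a set lookup, instead of a separate regular expression per keyword.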
What is the function of “token classification” in lexical analysis?
a) To categorize tokens into types such as keywords, operators, and identifiers
b) To generate syntax trees
c) To optimize intermediate code
d) To create machine code
Answer: a) To categorize tokens into types such as keywords, operators, and identifiers
What is the main purpose of handling escape sequences in lexical analysis?
a) To handle special characters within string literals
b) To optimize code performance
c) To create syntax trees
d) To convert tokens to machine code
Answer: a) To handle special characters within string literals
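A sketch of escape-sequence handling inside string literals: sequences such as `\n` and `\t` in the source text are translated into the characters they denote. Only a few escapes are handled; the table is illustrative:

```python
# Sketch of escape handling in string literals: map \n, \t, \\, \" to
# the characters they denote. The escape table is illustrative.
ESCAPES = {"n": "\n", "t": "\t", "\\": "\\", '"': '"'}

def unescape(body):
    out = []
    i = 0
    while i < len(body):
        ch = body[i]
        if ch == "\\" and i + 1 < len(body):
            # translate the escape; unknown escapes keep the raw character
            out.append(ESCAPES.get(body[i + 1], body[i + 1]))
            i += 2
        else:
            out.append(ch)
            i += 1
    return "".join(out)
```

So the two-character source sequence backslash-`n` becomes a single newline character in the token's string value.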
What is a “token stream” produced by a lexical analyzer?
a) A list of machine code instructions
b) A list of recognized tokens and their attributes
c) A sequence of syntax errors
d) A set of intermediate representations
Answer: b) A list of recognized tokens and their attributes
What is the purpose of a “lexer” in lexical analysis?
a) To parse tokens into syntax trees
b) To generate intermediate code
c) To perform tokenization and produce tokens
d) To optimize code
Answer: c) To perform tokenization and produce tokens
How does lexical analysis handle multi-character operators like <= or !=?
a) By treating them as separate tokens
b) By combining them into a single token
c) By ignoring them
d) By converting them to machine code
Answer: b) By combining them into a single token
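This is the "maximal munch" rule: with one character of lookahead, the lexer tries the longest operator first, so `<=` becomes a single token instead of `<` followed by `=`. A sketch with illustrative operator sets:

```python
# Sketch of maximal munch: try two-character operators before
# one-character ones. Operator sets are illustrative.
TWO_CHAR = {"<=", ">=", "==", "!="}
ONE_CHAR = {"<", ">", "=", "!", "+", "-"}

def scan_operator(text, pos):
    """Return (operator_lexeme, new_pos) for the operator starting at pos."""
    two = text[pos:pos + 2]
    if two in TWO_CHAR:          # one character of lookahead
        return (two, pos + 2)
    one = text[pos]
    if one in ONE_CHAR:
        return (one, pos + 1)
    raise SyntaxError(f"unknown operator at position {pos}")
```

For example, scanning `a<=b` at position 1 yields the single token `<=`, while scanning `a<b` yields `<`.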
Which phase of compilation directly deals with identifying variable names and their attributes?
a) Lexical analysis
b) Syntax analysis
c) Semantic analysis
d) Code generation
Answer: a) Lexical analysis
Which of the following best describes “lexical errors”?
a) Errors related to syntax structure
b) Errors related to invalid or unrecognized tokens
c) Errors related to runtime exceptions
d) Errors related to semantic mismatches
Answer: b) Errors related to invalid or unrecognized tokens
What does a lexical analyzer do when it encounters a string literal in the source code?
a) Generates a syntax error
b) Converts it into a token representing the string
c) Replaces it with a regular expression
d) Ignores it
Answer: b) Converts it into a token representing the string
What is the role of a “token table” in lexical analysis?
a) To store patterns for token recognition
b) To manage recognized tokens and their attributes
c) To generate syntax trees
d) To convert tokens to intermediate code
Answer: b) To manage recognized tokens and their attributes
Which of the following is not typically considered a lexical component?
a) Keywords
b) Identifiers
c) Operators
d) Control flow structures
Answer: d) Control flow structures
What is the significance of “lookahead” in lexical analysis?
a) To predict future tokens
b) To handle multi-character tokens
c) To perform semantic checks
d) To generate syntax trees
Answer: b) To handle multi-character tokens
How does a lexical analyzer treat whitespace in the source code?
a) It generates tokens for them
b) It removes them and does not generate tokens
c) It converts them into machine code
d) It treats them as syntax errors
Answer: b) It removes them and does not generate tokens
What is the main function of “token generation” in lexical analysis?
a) To convert tokens into machine code
b) To generate syntax trees from tokens
c) To produce tokens from source code sequences
d) To optimize token sequences
Answer: c) To produce tokens from source code sequences
How does lexical analysis contribute to error reporting in the compilation process?
a) By generating runtime error messages
b) By providing detailed syntax error reports
c) By identifying and reporting invalid tokens
d) By optimizing code for performance
Answer: c) By identifying and reporting invalid tokens
Which of the following best describes the purpose of “lexical rules”?
a) To define patterns for token recognition
b) To perform syntax analysis
c) To generate machine code
d) To optimize intermediate code
Answer: a) To define patterns for token recognition
What is the role of “comment removal” in lexical analysis?
a) To handle special characters in tokens
b) To ignore non-essential parts of the code
c) To generate syntax trees
d) To convert tokens into machine code
Answer: b) To ignore non-essential parts of the code
What does “token recognition” involve in lexical analysis?
a) Identifying and categorizing sequences of characters as tokens
b) Generating intermediate code
c) Creating syntax trees
d) Performing runtime checks
Answer: a) Identifying and categorizing sequences of characters as tokens
Which aspect of a token does a lexical analyzer typically handle?
a) Syntax rules
b) Machine code generation
c) Token classification and categorization
d) Runtime error handling
Answer: c) Token classification and categorization
What is an example of a lexical analyzer tool?
a) Bison
b) Flex
c) Yacc
d) Gcc
Answer: b) Flex
What does a lexical analyzer do with keywords in source code?
a) Converts them into numeric values
b) Categorizes them as tokens and processes them accordingly
c) Generates syntax trees
d) Translates them into machine code
Answer: b) Categorizes them as tokens and processes them accordingly
How does a lexical analyzer handle identifiers?
a) Converts them into numeric values
b) Recognizes and categorizes them as tokens
c) Ignores them
d) Generates machine code
Answer: b) Recognizes and categorizes them as tokens
What is the role of “token attributes” in lexical analysis?
a) To store the values of tokens
b) To generate syntax trees
c) To convert tokens into machine code
d) To perform semantic analysis
Answer: a) To store the values of tokens
Which of the following is an example of a lexical token?
a) Variable name
b) Syntax tree node
c) Intermediate code instruction
d) Runtime error
Answer: a) Variable name