1.      Mention two practical applications of NER.
2 Marks
2.      With examples explain the different types of NER attributes.                           10 Marks
3.      What do you understand about Natural language processing?                              2 Marks
4.      What are stop words?                 2 Marks
5.      List any two real life applications of NLP.                        2 Marks
6.      Explain the difference between precision and recall in information retrieval.
                                                                         5 Marks
7.      What is NLTK?              2 Marks
8.      What is Multi Word Tokenization?                         2 Marks
9.      What are stems?            2 Marks
10.     What are affixes?             2 Marks
11.     What is lexicon?           2 Marks
12.     Why is multi-word tokenization preferred over single-word tokenization?                          2 Marks
13.     What is sentence segmentation?                           2 Marks
14.     Why is sentence segmentation important?                            2 Marks
15.     What is morphology in NLP?                     2 Marks
16.     List the different types of morphology available                   2 Marks
17.     What is the difference between NLP and NLU?                        2 Marks
18.     Give some popular examples of Corpus.                    2 Marks
19.     State the difference between word and sentence tokenization?                           2 Marks
20.     What are the phases of problem-solving in NLP?                     5 Marks
21.     Explain the process of word tokenization with example.                       5 Marks
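For reference, a minimal word and sentence tokenization sketch in Python using NLTK (the sample sentence and the need for the 'punkt' tokenizer data are assumptions, not part of the question):

    # Reference sketch: word and sentence tokenization with NLTK.
    # Assumes NLTK is installed; newer NLTK versions may also require the 'punkt_tab' data package.
    import nltk
    nltk.download("punkt", quiet=True)              # tokenizer models, downloaded once
    from nltk.tokenize import sent_tokenize, word_tokenize

    text = "Dr. Smith moved to New York. He works on NLP!"
    sentences = sent_tokenize(text)                 # splits on sentence boundaries, not on every '.'
    words = word_tokenize(sentences[0])             # ['Dr.', 'Smith', 'moved', 'to', 'New', 'York', '.']
    print(sentences)
    print(words)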
22.     How does Named Entity Recognizer work?                             5 Marks
23.     What are the benefits of eliminating stop words? Give some examples where stop word
elimination may be harmful.             5 Marks
24.     What do you mean by RegEx? Explain with example.                             5 Marks
25.     Explain Dependency Parsing in NLP?                       5 Marks
26.     Write a regular expression to represent a set of all strings over {a, b} of even length. 5
        Marks
27.     Write a regular expression to represent a set of all strings over {a, b} of length 4 starting with
an a.   5 Marks
28.     Write a regular expression to represent a set of all strings over {a, b} containing at least one
a.              5 Marks
29.     Compare and contrast NLTK and Spacy, highlighting their differences.                       5 Marks
30.     What is a Bag of Words? Explain with examples.               5 Marks
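As a reference for the Bag-of-Words questions, a minimal sketch using scikit-learn's CountVectorizer (the library choice and the toy documents are assumptions; any word-count table illustrates the same idea):

    # Reference sketch: Bag of Words as a document-term count matrix (scikit-learn assumed installed).
    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["the cat sat on the mat", "the dog sat on the log"]
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(docs)              # sparse matrix: one row per document, one column per word
    print(vectorizer.get_feature_names_out())       # learned vocabulary (word order is ignored)
    print(X.toarray())                              # raw counts; e.g. 'the' appears twice in each document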
31.     Differentiate regular grammar and regular expression.                  5 Marks
32.     Describe the word and sentence tokenization steps with the help of an example. 10 Marks
33.     How can the common challenges faced in morphological analysis in natural language
processing be overcome?             10 Marks
34.    Derive Minimum Edit Distance Algorithm and compute the minimum edit distance between
the words “MAM” and “MADAM”.                 10 Marks
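For the minimum-edit-distance questions here and later in the set, a reference sketch of the standard dynamic-programming algorithm with unit costs (for the Levenshtein variant with substitution cost 2, pass sub_cost=2):

    # Reference sketch: minimum edit distance with insertion = deletion = substitution = 1.
    def min_edit_distance(source, target, sub_cost=1):
        n, m = len(source), len(target)
        # dp[i][j] = cost of transforming source[:i] into target[:j]
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            dp[i][0] = i                            # delete every remaining source character
        for j in range(m + 1):
            dp[0][j] = j                            # insert every remaining target character
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = 0 if source[i - 1] == target[j - 1] else sub_cost
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + sub)  # substitution (free on a match)
        return dp[n][m]

    print(min_edit_distance("MAM", "MADAM"))        # 2: insert 'D' and 'A'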
35.     Discuss the problem-solving approaches of any two real-life applications of Information
Extraction and NER in Natural Language Processing.              10 Marks
36.     How would you go about solving an NLP application? Justify with an example.                    10 Marks
37.     What are corpora? Define the steps of creating a corpus for a specific task.                10 Marks
38.     What is Information Extraction?                    5 Marks
39.    State the different applications of Sentiment analysis and Opinion mining with examples.
Write down the variations as well.               10 Marks
40.     State a few applications of Information Retrieval.                     5 Marks
41.     What is text normalization?               10 Marks
42.    Do you think there are any differences between tokenization and normalization? Justify your
answer with examples.          10 Marks
43.     What makes part-of-speech (POS) tagging crucial in NLP, in your opinion? Give an example
to back up your response.            5 Marks
44.     Criticize the shortcomings of the fundamental Top-Down Parser.                   5 Marks
45.     Do you believe there are any distinctions between prediction and classification? Illustrate
with an example.                5 Marks
46.   Explain the connection between word tokenization and phrase tokenization using examples.
How do both tokenization methods contribute to the development of NLP applications?
      10 Marks
47.     “Natural Language Processing (NLP) has many real-life applications across various
industries.”- List any two real-life applications of Natural Language Processing.      5 Marks
48.     Find all strings of length 5 or less in the regular set represented by the following regular
expressions:
(a)     (ab + a)*(aa + b)
(b)     (a*b + b*a)*a                                                      5 Marks
49.     Write regular expressions for the following languages.
1. the set of all alphabetic strings;
2. the set of all lower case alphabetic strings ending in a b;
3. the set of all strings from the alphabet a,b such that each a is immediately preceded by and
immediately followed by a b;                                10 Marks
50.     Explain Rule based POS tagging                 5 Marks
51.     Differentiate regular grammar and regular expression                         5 Marks
52.     What is NLTK?              2 Marks
53.     What is Multi Word Tokenization?                         2 Marks
54.     What is sentence segmentation?                           2 Marks
55.     What is morphology in NLP?                     2 Marks
56.     Give some popular examples of Corpus.                    2 Marks
57.     What do you mean by word tokenization?                             2 Marks
58.     Find the minimum edit distance between two strings ELEPHANT and RELEVANT?
                10 Marks
59.   If str1 = "SUNDAY" and str2 = "SATURDAY" are given, calculate the minimum edit distance
between the two strings.                10 Marks
60.     List the different types of morphology available.                            5 Marks
61.     What is Stemming?                    2 Marks
62.     What is Corpus in NLP?               2 Marks
63.     State with example the difference between stemming and lemmatization.                   5 Marks
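For reference, a small sketch contrasting stemming and lemmatization with NLTK (the word list is illustrative; the lemmatizer needs the WordNet data):

    # Reference sketch: stemming (suffix stripping) vs. lemmatization (dictionary-based) in NLTK.
    import nltk
    nltk.download("wordnet", quiet=True)            # data needed by the lemmatizer
    from nltk.stem import PorterStemmer, WordNetLemmatizer

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()
    for word in ["studies", "running", "better"]:
        print(word,
              stemmer.stem(word),                   # e.g. 'studies' -> 'studi' (not a real word)
              lemmatizer.lemmatize(word, pos="v"))  # e.g. 'running' -> 'run' (a valid lemma)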
64.     Write down the different stages of NLP pipeline.                             10 Marks
65.     What is your understanding about Chatbot in the context of NLP?                         10 Marks
66.     Write a short note on text pre-processing in the context of NLP. Discuss outliers and how to
handle them.             10 Marks
67.     Explain with example the challenges with sentence tokenization.                         5 Marks
68.     Explain some of the common NLP tasks.            5 Marks
69.     What do you mean by text extraction and cleanup? Discuss with examples.            10 Marks
70.     What is word sense ambiguity in NLP? Explain with examples.              5 Marks
71.     Write short note on Bag of Words (BOW).                    10 Marks
72.     Explain Homonymy with example?                   2 Marks
73.     Define WordNet.                 2 Marks
74.     Consider a document containing 100 words wherein the word apple appears 5 times and
assume we have 10 million documents and the word apple appears in one thousandth of these.
Then, calculate the term frequency and the inverse document frequency.                  10 Marks
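A sketch of the usual hand calculation for the TF-IDF question above, assuming tf = count / document length and idf = log10(N / df); textbooks differ on the exact definitions, so treat the numbers as illustrative:

    # Reference sketch for the TF-IDF calculation above.
    # Assumptions: tf = raw count / document length, idf = log10(N / df).
    import math

    tf = 5 / 100                                    # 'apple' occurs 5 times in a 100-word document
    df = 10_000_000 / 1000                          # 'apple' appears in one thousandth of 10M documents
    idf = math.log10(10_000_000 / df)               # log10(10^7 / 10^4) = 3
    print(tf, idf, tf * idf)                        # 0.05  3.0  0.15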
75.     Explain the relationship between Singular Value Decomposition, Matrix Completion and
Matrix Factorization?            5 Marks
76.     Give two examples that illustrate the significance of regular expressions in NLP. 5 Marks
77.    Why is multi-word tokenization preferable to single-word tokenization in NLP? Give
examples.             5 Marks
78.     Differentiate between formal language and natural language.              10 Marks
79.     Explain lexicon, lexeme and the different types of relations that hold between lexemes.
                 10 Marks
80.     State the advantages of bottom-up chart parser compared to top-down parsing. 10 Marks
81.     Marks
82.     Describe the Skip-gram model and its intuition in word embeddings.                 10 Marks
83.     Explain the concept of Term Frequency-Inverse Document Frequency (TF-IDF) based ranking
in information retrieval.       10 Marks
84.     Tokenize and tag the following sentence:                   2 Marks
85.     What different pronunciations and parts-of-speech are involved?                    2 Marks
86.     Compute the edit distance (using insertion cost 1, deletion cost 1, substitution cost 1) of
“intention” and “execution”. Show your work using the edit distance grid.                 10 Marks
87.     What is the purpose of constructing corpora in Natural Language Processing (NLP) research?
                5 Marks
88.     What role do regular expressions play in searching and manipulating text data? 5 Marks
89.     Explain the purpose of WordNet in Natural Language Processing (NLP).                10 Marks
90.     What is Pragmatic Ambiguity in NLP?               10 Marks
91.      Describe the class of strings matched by the following regular expressions:
a. [a-zA-Z]+    b. [A-Z][a-z]*                           10 Marks
92.    Extract all email addresses from the following: “Contact us at info@example.com or
support@anothersite.net.”               10 Marks
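A reference sketch for the email-extraction question using Python's re module (the pattern is a simplified matcher, not a full RFC-compliant one):

    # Reference sketch: extract email addresses with a simplified regular expression.
    import re

    text = "Contact us at info@example.com or support@anothersite.net."
    pattern = r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"       # local part @ domain . tld (simplified)
    print(re.findall(pattern, text))                # ['info@example.com', 'support@anothersite.net']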
93.      This regex is intended to match one or more uppercase letters followed by zero or more
digits. [A-Z] + [0-9]* However, it has a problem. What is it, and how can it be fixed?
                                  10 Marks
94.     Write a regex to find all dates in a text. The date formats should include:
DD-MM-YYYY
MM-DD-YYYY
YYYY-MM-DD                                                         10 Marks
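A reference sketch for the date-extraction question (the pattern only checks the digit layout; it does not validate real calendar dates, and DD-MM-YYYY vs MM-DD-YYYY cannot be told apart by the pattern alone):

    # Reference sketch: find DD-MM-YYYY / MM-DD-YYYY / YYYY-MM-DD style dates.
    import re

    text = "Orders placed on 25-12-2023, 12-25-2023 and 2023-12-25 ship together."
    pattern = r"\b(?:\d{2}-\d{2}-\d{4}|\d{4}-\d{2}-\d{2})\b"
    print(re.findall(pattern, text))                # ['25-12-2023', '12-25-2023', '2023-12-25']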
95.     Compute the minimum edit distance between the words MAMA and MADAAM. 10 Marks
96.     Evaluate the minimum edit distance in transforming the word ‘kitten’ to ‘sitting’ using
insertion, deletion, and substitution cost as 1.       10 Marks
1.    What are language models?                 2 Marks
2.    Describe the n-gram model with a specific example.                2 Marks
3.    Write two differences between bi-gram and tri-gram models.                2 Marks
4.    What is chain rule of probability?                2 Marks
5.    When we are considering the bigram model what approximation/s do we make to the actual
      formula to calculate probability?                 5 Marks
6.    What is a Markov assumption?              2 Marks
7.    What is maximum likelihood estimation used for?                   2 Marks
8.    Given a word w_n and the previous word w_(n-1), how do we normalize the count of bigrams?
      State the formula for the same.                 10 Marks
9.    What is relative frequency in n-gram model?               2 Marks
10.   What are the building blocks of semantic system?                  5 Marks
11.   Discuss lexical ambiguity.                5 Marks
12.   Discuss semantic ambiguity.               5 Marks
13.   Discuss syntactic ambiguity.              5 Marks
14.   What is the need for meaning representation?              5 Marks
15.   What is the major difference between lexical analysis and semantic analysis in NLP?
                        5 Marks
16.   Name two language modelling toolkits.             2 Marks
17.   With examples explain the different types of parts of speech attributes.          5 Marks
18.   Explain extrinsic evaluation of the N-gram model and the difficulties related to it.
                        10 Marks
19.   With an example explain the path-based similarity check for two words.            5 Marks
20.   Define Homonymy, Polysemy and Synonymy with examples.                     5 Marks
21.   How does WordNet assist in extracting semantic information from a corpus? 5 Marks
22.   How does NLP employ computational lexical semantics? Explain.             5 Marks
23.   What are the problems with the basic path-based similarity measure, and how are they
      addressed through information-content similarity metrics?               10 Marks
24.   Explain extended Lesk algorithm with example.             5 Marks
25.   State the difference in properties of Rule based POS tagging and Stochastic POS tagging.
                        5 Marks
26.   What is stochastic POS tagging? What are the properties of stochastic POS tagging?
                        10 Marks
27.   What is rule based POS tagging and what are the properties of the same?           10 Marks
28.   Give examples to illustrate how the n-gram approach is utilized in word prediction.
                        10 Marks
29.   Highlight transformation-based tagging and explain how it works.               10 Marks
30.   State the difference between structured data and unstructured data.               10 Marks
31.   What is semi-structured data? Explain with an example.            5 Marks
32.   How does a supervised machine learning algorithm contribute to text classification?
                        5 Marks
33.   List the uses of emotion analytics.               5 Marks
34. Say you are an employee of a renowned food delivery company and your superior has asked
     you to do a market survey to search for potential competitors and zero in on areas where
     your company needs to improve to become the top company in the market. How will you
     approach this task and accomplish the goal?                         10 Marks
35. Explain a classic search model with a diagram.                      5 Marks
 36. Why is part-of-speech (POS) tagging required in NLP?                       5 Marks
37. What is vocabulary in NLP?                  2 Marks
38. What do you mean by Information Extraction?                         2 Marks
39. What is morphological parsing? Explain the steps of a morphological parser.             5 Marks
40. What is BOW (Bag of Words)?                 5 Marks
41. State the difference between formal language and natural language.                   5 Marks
42. Assume there are 4 topics, namely Cricket, Movies, Politics and Geography, and 4 documents
     D1, D2, D3 and D4, each containing an equal number of words. These words are taken from a
     pool of 4 distinct words, namely {Shah Rukh, Wicket, Mountains, Parliament}, and there can
     be repetitions of these 4 words in each document. Assume you want to recreate document
     D3. Explain the process you would follow to achieve this and explain how recreating
     document D3 can help us understand the topic of D3.                 10 Marks
43. What is text parsing?               2 Marks
44. Explain Sentiment analysis in market research?              2 Marks
45. Describe Hidden Markov Models.                      2 Marks
46. State and explain in detail the main advantage of the Latent Dirichlet Allocation methodology
     over Probabilistic Latent Semantic Analysis for building a Recommender system.
              10 Marks
47. Explain in detail how the Matrix Factorization technique used for building Recommender
     Systems effectively boils down to solving a regression problem.            5 Marks
48. What are the two main approaches used in computational linguistics for Part of Speech
     (POS) tagging?                     5 Marks
49. What is WordNet?                            2 Marks
50. Describe the hierarchy of relationships in WordNet.                         5 Marks
51. How are morphological operations applied in NLP?                            5 Marks
52. Explain the concept of hypernyms, hyponyms, heteronyms in WordNet.                   10 Marks
53. Discuss the advantages and disadvantages of CBOW and Skip-gram models.               10 Marks
54. Explain the process of text classification, focusing on Naïve Bayes' Text Classification
     algorithm.                10 Marks
55. How do you use naïve bayes model for collaborative filtering?               5 Marks
56. Is lexical analysis different from semantic analysis? How?                   10 Marks
57. Define what N-grams are in the context of Natural Language Processing (NLP). 5 Marks
58. What are word embeddings in the context of Natural Language Processing (NLP)?                10
              Marks
59. What is "vector semantics" in NLP, and why is it useful for understanding word meanings?
                       10 Marks
60. Discuss a significant limitation of TF-IDF.         2 Marks
61. Discuss the application of regular expressions in Natural Language Processing (NLP),
    emphasizing their role in text processing tasks. Provide examples.                   5 Marks
62. Explain the concept of N-grams in NLP and with examples discuss their importance in
    language modelling to demonstrate how N-grams capture sequential patterns in text data
                     10 Marks
63. Explain the significance of n-grams in the design of any text classification system using
    examples.                5 Marks
64. Discuss the disadvantage of uni-gram in information extraction.              5 Marks
65. Define homographs and provide an example.                  2 Marks
66. How is the Levenshtein distance algorithm used to find similar words to a given word?
                      10 Marks
67. Define heteronyms and provide an example.                  2 Marks
68. Explain the concept of polysemy and provide an example.                              2 Marks
69. Define synonyms and antonyms and provide examples of each.                           2 Marks
70. We are given the following corpus:
    <s> I am sam </s>
    <s> Sam I am </s>
    <s> I am Sam </s>
    <s> I do not like green eggs and Sam</s>
    Using a bigram language model with add-one smoothing, what is P(Sam | am)? Include <s> &
    </s> in your counts just like any other token.                               10 Marks
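A reference sketch for the add-one (Laplace) smoothed bigram question above; it assumes the tokens are lower-cased so that 'sam' and 'Sam' are one word, whitespace is normalized, and <s>/</s> are counted like any other token:

    # Reference sketch: add-one smoothed bigram probability on the corpus from the question.
    from collections import Counter

    sentences = [
        "<s> I am sam </s>",
        "<s> Sam I am </s>",
        "<s> I am Sam </s>",
        "<s> I do not like green eggs and Sam </s>",
    ]
    sents = [s.lower().split() for s in sentences]           # case-folding is an assumption
    unigrams = Counter(t for s in sents for t in s)
    bigrams = Counter((s[i], s[i + 1]) for s in sents for i in range(len(s) - 1))
    V = len(unigrams)                                        # vocabulary size, <s> and </s> included

    def p_add_one(prev, word):
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

    print(p_add_one("am", "sam"))                            # (2 + 1) / (3 + 11) = 3/14 ≈ 0.214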
71. Comment on the validity of the following statements:
    a) Rule-based taggers are non-deterministic
    b) Stochastic taggers are language independent
    c) Brill’s tagger is a rule-based tagger                                           10 Marks
1. In the context of natural language processing, how can we leverage the concepts of TF-IDF,
    training set, validation set, test set, and stop words to improve the accuracy and
    effectiveness of machine learning models and algorithms? Additionally, what are some
    potential challenges and considerations when working with these concepts, and how can we
    address them? 5 Marks
2. Define text classification.                  2 Marks
3. Describe the ways of Information Extraction from unstructured text.                   5 Marks
4. Explain ad-hoc retrieval problems.                   2 Marks
5. What aspects of ad-hoc retrieval problems are addressed by Information Retrieval research?
             2 Marks
6. What are the contents of an Information Retrieval model?                      2 Marks
7. What is an inverted index? 2 Marks
8. Describe how hand-coded rules help in performing text classification.                 5 Marks
9. What are the machine learning approaches used for text classification?                5 Marks
10. What is/are the drawback/s of the Naive Bayes classifier?                    5 Marks
11. Explain the result of Multinomial Naïve Bayes Independence Assumptions.              5 Marks
12. Write two NLP applications where we can use the bag-of-words technique.              5 Marks
13. What is the problem with maximum likelihood estimation for the Multinomial Naive Bayes
    classifier? How can it be resolved?                 10 Marks
14. Explain the confusion matrix that can be generated in terms of a spam detector. 5 Marks
15. How is k-fold cross-validation used for evaluating a text classifier?                5 Marks
16. Explain practical issues of a text classifier and how to solve them.                 5 Marks
17. What are the types of Text classification techniques?               5 Marks
18. Give any 3 different evaluation metrics available for text classification. Explain with
    examples.                 10 Marks
19. What are the evaluation measures to be undertaken to judge the performance of a model?
                   2 Marks
20. With a schematic diagram explain Word2vec type of word embedding.               5 Marks
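As a companion to the Word2vec/Doc2Vec questions, a minimal training sketch with the gensim library (parameter names follow gensim 4.x; the toy corpus is purely illustrative):

    # Reference sketch: training a tiny skip-gram Word2Vec model with gensim (assumed installed).
    from gensim.models import Word2Vec

    corpus = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["dogs", "and", "cats", "are", "pets"],
    ]
    model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)   # sg=1 selects skip-gram
    print(model.wv["king"].shape)                    # 50-dimensional embedding for 'king'
    print(model.wv.most_similar("king", topn=2))     # nearest neighbours in the toy vector space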
21. Explain the working of Doc2Vec type of word embedding with labelled diagram. 5 Marks
22. With examples, explain the following word-to-sequence analyses:              5 Marks
    a) vector semantics
    b) probabilistic language model
23. Define opinion mining.          2 Marks
24. What are the aspects taken into account while collecting feedback of brands for sentiment
    analysis?                 5 Marks
25. What is intent analysis?           2 Marks
26. Explain emotion analysis.                   2 Marks
27. How does emotional analytics work?                   5 Marks
28. Naïve Bayes classifier is not so naïve – explain.            5 Marks
29. With detailed steps explain the working of Multinomial Naive Bayes learning. 5 Marks
30. What is micro averaging and macro averaging? Explain with an example.             10 Marks
31. State 3 opinion mining techniques with proper explanation.                10 Marks
32. What issue crops up for Information Retrieval based on keyword search in the case of a very
    large document?            5 Marks
33. What are the initial stages of text processing?              10 Marks
34. What is the goal of an IR system?                    10 Marks
35. What are the different ways to use Bag-of-words representation for text classification?
                     10 Marks
36. State the difference between sentiment analysis, intent analysis and emotion analysis.
                     10 Marks
37. How is sentiment analysis used by different brands to assess the status of the market after
    launching a product?               10 Marks
38. Mention a few practical applications of emotion analysis using emotion recognition. 10 Marks
39. Step by step explain how Naive Bayes classifier can be used for text classification.
                     10 Marks
40. What are the 4 steps of text normalization?                  5 Marks
41. Highlight practical applications of text classification concept.          10 Marks
42. What is Named Entity Recognition (NER)?                      2 Marks
43. How is Named Entity Recognition useful in NLP applications?               5 Marks
44. How is k-fold cross-validation used for evaluating a text classifier?             10 Marks
45. Explain the fundamental concepts of Natural Language Processing (NLP) and discuss its
    significance in today's digital era, providing examples of real-world applications and
    potential future advancements.              5 Marks
46. What is Ambiguity? Explain different types of ambiguity in NLP.           5 Marks
47. What are the benefits of a text classification system? Give an example.           5 Marks
48. Explain the Building Blocks of Semantic System?              5 Marks
49. What is NLTK? How is it different from Spacy?                5 Marks
50. Explain Dependency Parsing in NLP?                   10 Marks
51. What are the steps involved in pre-processing data for NLP?               5 Marks
52. What are some common applications of chatbots in various industries?              10 Marks
53. Compute the minimum edit distance in transforming the word DOG to COW using
    Levenshtein distance, i.e., insertion = deletion =1 and substitution = 2.         10 Marks
54. What are word embeddings in NLP and how can they be used in various NLP applications?
                     10 Marks
    55. Do you believe there are any distinctions between prediction and classification? Illustrate
        with an example.                 5 Marks
    56. How do lexical resources like WordNet contribute to lexical semantics in NLP? How does
        lexical ambiguity impact NLP tasks such as machine translation or sentiment analysis?
                        5 Marks
    57. Analyze the purpose of topic modeling in text analysis.          5 Marks
    58. Given the following dataset, classify whether a new email is spam or not using Naïve Bayes.
Email   Contains "Offer"   Contains "Win"   Contains "Money"   Spam (Yes=1, No=0)
1       Yes                Yes              No                 1
2       Yes                No               Yes                1
3       No                 Yes              No                 0
4       Yes                No               No                 0
5       No                 Yes              Yes                0
Using Naïve Bayes, predict whether the email (Offer = Yes, Win = Yes, Money = Yes) is
spam or not.                                                    10 Marks
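A reference sketch of one way to organise the Naive Bayes computation for the dataset above, using add-one (Laplace) smoothing over the two possible values of each binary feature (the smoothing choice is an assumption; an unsmoothed calculation is also acceptable if no zero counts arise):

    # Reference sketch: Naive Bayes with add-one smoothing on the small spam dataset above.
    rows = [        # (Offer, Win, Money, Spam)
        ("Yes", "Yes", "No",  1),
        ("Yes", "No",  "Yes", 1),
        ("No",  "Yes", "No",  0),
        ("Yes", "No",  "No",  0),
        ("No",  "Yes", "Yes", 0),
    ]
    features = ["Offer", "Win", "Money"]
    query = {"Offer": "Yes", "Win": "Yes", "Money": "Yes"}

    def score(cls):
        cls_rows = [r for r in rows if r[3] == cls]
        result = len(cls_rows) / len(rows)                       # class prior P(cls)
        for i, f in enumerate(features):
            match = sum(1 for r in cls_rows if r[i] == query[f])
            result *= (match + 1) / (len(cls_rows) + 2)          # P(feature value | cls), add-one smoothed
        return result

    scores = {c: score(c) for c in (0, 1)}
    print(scores)                                                # the larger score wins
    print("spam" if scores[1] > scores[0] else "not spam")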
    59. A company wants to classify customer feedback as "Positive" or "Negative" based on word
        occurrences. The training dataset is:
Feedback   Contains "Good"   Contains "Fast"   Contains "Cheap"   Sentiment (Positive = 1, Negative = 0)
1          Yes               Yes               No                 1
2          No                Yes               Yes                1
3          Yes               No                No                 0
4          No                No                Yes                0
Given a new feedback (Good = Yes, Fast = Yes, Cheap = No), use Naive Bayes to classify
whether the sentiment is positive or negative.                        10 Marks
    60. A weather dataset is given for predicting whether a person will play tennis.
Day   Outlook    Temperature   Humidity   Wind     Play Tennis
1     Sunny      Hot           High       Weak     No
2     Sunny      Hot           High       Strong   No
3     Overcast   Hot           High       Weak     Yes
4     Rain       Mild          High       Weak     Yes
5     Rain       Cool          Normal     Weak     Yes
6     Rain       Cool          Normal     Strong   No
Using Naive Bayes, classify whether a person will play tennis if the weather conditions are:
         Outlook = Rain
         Temperature = Mild
         Humidity = High
         Wind = Strong                                             10 Marks
1. Examine the broad classification of Recommendation systems?                 5 Marks
2. Explain the working principle of Content Based recommendation system.               5 Marks
3. Explain the working principle of collaborative filtering system.                    5 Marks
4. Define the evaluation metrics of the recommendation system.                          2 Marks
5. Give the definition of Hybrid recommendation systems.               2 Marks
6. What are Conversational Agents?                              2 Marks
7. What is text Summarization?                          2 Marks
8. Explain Item-Based Collaborative Filtering?                          5 Marks
9. State some applications of topic modelling.                          2 Marks
10. What’s the need for text summarization?                             2 Marks
11. A chatbot is known as a conversational agent. Explain.                      5 Marks
12. What is the advantage of artificial intelligence in chatbots?                       5 Marks
13. State the concept of Retrieval-based model.              2 Marks
14. Define Question answering system.                           2 Marks
15. Give some examples of question answering systems.                                    2 Marks
16. What is User-Based Collaborative Filtering?                         2 Marks
17. State the concept of Information retrieval (IR) based question answering.          2 Marks
18. Compare Information Retrieval and Web Search.                               5 Marks
19. Define Recommendation based on User Ratings using an appropriate example. 5 Marks
20. Describe sentiment analysis with an example.                        5 Marks
21. Explain the different types of recommendation systems.                             5 Marks
22. Explain the concept of the Recommendation System with real-life examples. 5 Marks
23. Illustrate two kinds of conversational agents.                      5 Marks
24. Explain Collaborative Recommendation System with example.                           5 Marks
25. Describe the most common use-cases of sentiment analysis?                           5 Marks
26. What are steps involved in Latent Dirichlet Allocation?                     5 Marks
27. Describe Twitter sentiment analysis.                        5 Marks
28. Define the Chatbot Architectures.                           5 Marks
29. Illustrate Multi document summarization.                            5 Marks
30. Define topic modelling.                    5 Marks
31. Describe Extraction-based summarization.                            5 Marks
32. State the differences between Extraction-based summarization and Abstraction-based
    summarization.                             5 Marks
33. Classify Recommendation techniques with examples.                           10 Marks
34. Illustrate different Summarization techniques.                      10 Marks
35. Explain the Use-Cases of the Recommendation System.                         10 Marks
36. Differentiate collaborative filtering and content-based systems.                    10 Marks
37. Define the steps of sentiment analysis.                     10 Marks
38. What is LDA and how is it different from others?                            10 Marks
39. With example illustrate Abstractive summarization.                          10 Marks
40. Suppose you have the following set of sentences:                            10 Marks
41. Illustrate the advantages and disadvantages of a Content-based and collaborative filtering
    recommendation system.                              10 Marks
42. In the context of natural language processing, how can we leverage the concepts of TF-IDF,
    training set, validation set, test set, and stop words to improve the accuracy and
    effectiveness of machine learning models and algorithms? Additionally, what are some
    potential challenges and considerations when working with these concepts, and how can
    we address them?                           10 Marks
43. Describe how extraction-based and abstraction-based summarizations vary from one
    another. How would you go about creating an extractive summarization system?
                      10 Marks
44. Explain how pretraining techniques such as GPT (Generative Pretrained Transformer)
    contribute to improving natural language understanding tasks. Discuss the key components
    and training objectives of GPT models.                     10 Marks
45. How do you use naïve bayes model for collaborative filtering?                     10 Marks
46. Is lexical analysis different from semantic analysis? How?                         5 Marks
47. How does NLP help in sentiment analysis?                          2 Marks
48. Define what N-grams are in the context of Natural Language Processing (NLP).2 Marks
49. How are N-grams utilized in Natural Language Processing (NLP)? 2
    Marks
50. What is meant by data augmentation?                       2 Marks
51. How would you create a Recommender System for text inputs?                         5 Marks
52. Discuss how the popular word embedding technique Word2Vec is implemented in the
    algorithms: Continuous Bag of Words (CBOW) model, and the Skip-Gram model.
                     5 Marks
53. Differentiate collaborative filtering and content-based systems.                   10 Marks
54. Explain the Use-Cases of the Recommendation System.                        10 Marks
55. Analyze different chatbot architectures in NLP, such as rule-based, retrieval-based, and
    generative models, assessing their effectiveness based on scalability, response quality, and
    adaptability                       10 Marks
56. Discuss how ChatGPT utilizes large-scale pretraining and transformer-based architectures to
    generate contextually relevant responses.                         5 Marks
57. Elaborate on the Collaborative Recommendation System with an example. 5 Marks
58. Describe five different applications of Natural Language Processing (NLP) in various fields
    such as healthcare, finance, customer service, and education                       5 Marks
59. Explain the importance of natural language understanding (NLU) in chatbot development?
                              5 Marks
60. Explain the architecture of the ChatGPT model in Natural Language Processing (NLP).
                     5 Marks
61. What are some of the ways in which data augmentation can be done in NLP projects?
                     5 Marks
62. Compare Information Retrieval and Web Search.                              5 Marks
63. Explain Item-Based Collaborative Filtering?                       5 Marks
64. What is dialogue management in a chatbot model?                            2 Marks
65. How do pre-trained language models like GPT-3 contribute to chatbot development?
                     10 Marks
66. What is User-Based Collaborative Filtering?                       2 Marks
67. Can statistical techniques be used to perform the task of machine translation? If so, explain
    briefly.                10 Marks
68. Explain text summarization and multi-document text summarization with a neat diagram.
                            10 Marks
69. With example illustrate Abstraction-based summarization.                          5 Marks
70. Illustrate the advantages and disadvantages of a Content-based and collaborative filtering
    recommendation system.                          5 Marks