Our Set Question

The document provides a comprehensive overview of Artificial Intelligence (AI), its goals, major problems, and various types of intelligent agents. It covers key concepts in machine learning, including algorithms, evaluation metrics, and challenges, as well as deep learning techniques such as neural networks and optimization methods. Additionally, it discusses practical applications and considerations for algorithm selection in real-world scenarios.

1. Define Artificial Intelligence and list its major goals.

AI is the simulation of human intelligence processes by machines. Major goals include problem-solving, learning, reasoning, perception, and language understanding.
2. What are the major problems of AI and how are they addressed?
Problems include knowledge representation, reasoning, learning, perception, and
natural language understanding. They are addressed using techniques like search
algorithms, logic, machine learning, and probabilistic models.
3. Describe the different types of intelligent agents.
Types include simple reflex agents, model-based agents, goal-based agents, utility-
based agents, and learning agents.
4. How do goal-based agents differ from utility-based agents?
Goal-based agents act to achieve a specific goal; utility-based agents use a utility function that scores how desirable each state is and choose the action that maximizes expected utility, allowing them to trade off between conflicting goals.
5. Differentiate between breadth-first and depth-first search.
Breadth-first explores all nodes at a level before going deeper; depth-first explores as
far as possible along a branch before backtracking.
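A minimal Python sketch of both strategies, assuming the graph is stored as an adjacency-list dictionary (the graph, node names, and function names are illustrative only):

```python
from collections import deque

def bfs(graph, start, goal):
    """Level-by-level search; returns a shortest path in an unweighted graph."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

def dfs(graph, start, goal, path=None, visited=None):
    """Follows one branch as deep as possible before backtracking."""
    path = (path or []) + [start]
    visited = visited or set()
    visited.add(start)
    if start == goal:
        return path
    for neighbour in graph.get(start, []):
        if neighbour not in visited:
            result = dfs(graph, neighbour, goal, path, visited)
            if result:
                return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))  # ['A', 'B', 'D'] -- a shortest path (2 edges)
print(dfs(graph, "A", "D"))  # ['A', 'B', 'D'] -- first branch explored fully
```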
6. What is depth-limited search and when is it used?
A depth-first search with a fixed depth limit to avoid infinite paths; used when the
search space is infinite or very large.
7. Explain bidirectional search with an example.
Searches simultaneously from start and goal states, meeting in the middle; e.g.,
finding shortest path in a graph faster by searching from both ends.
8. Compare and contrast different uninformed search strategies.
Breadth-first is complete and finds the shallowest solution but is memory-intensive; depth-first uses far less memory but may not find the shortest path and can get lost on infinite branches; bidirectional search runs from both ends and reduces the effective search depth.
9. Describe the A* search algorithm and its advantages.
A* uses heuristics to estimate cost from current node to goal, combining path cost and
heuristic for efficient, optimal search.
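A short A* sketch under the usual assumptions: a weighted adjacency-list graph and an admissible heuristic table, both made up for this example:

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand the node with the lowest f = g (path cost so far) + h (heuristic)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + h(neighbour), new_g,
                                          neighbour, path + [neighbour]))
    return None, float("inf")

# Toy weighted graph and an admissible heuristic (never overestimates).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 3)], "D": []}
h = {"A": 3, "B": 2, "C": 2, "D": 0}.get
print(a_star(graph, h, "A", "D"))  # (['A', 'B', 'C', 'D'], 5)
```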
10. How does simulated annealing work in local search?
Probabilistically accepts worse states to escape local minima, gradually reducing
acceptance probability over time.
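A minimal simulated-annealing sketch; the cooling schedule, neighbour function, and toy objective below are arbitrary choices, not prescribed values:

```python
import math, random

def simulated_annealing(cost, neighbour, x0, temp=1.0, cooling=0.995, steps=5000):
    """Accept worse moves with probability exp(-delta/temp); temp decays each step."""
    x, best = x0, x0
    for _ in range(steps):
        candidate = neighbour(x)
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if cost(x) < cost(best):
                best = x
        temp *= cooling
    return best

# Toy 1-D objective with several local minima around its global minimum near x = 3.
cost = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
print(round(simulated_annealing(cost, neighbour, x0=-5.0), 2))
```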
11. What are genetic algorithms and how do they work?
Search algorithms inspired by natural selection using crossover, mutation, and
selection to evolve solutions.
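One possible genetic-algorithm sketch on the toy "one-max" problem (maximise the number of 1 bits); the population size, mutation rate, and operators are illustrative defaults:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100, mutation=0.02):
    """Evolve bit strings: selection by fitness, single-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation else bit for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm(fitness=sum)   # fitness = number of 1 bits
print(best, sum(best))                  # typically close to all ones
```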
12. Explain constraint satisfaction problems with examples.
Problems where variables must satisfy constraints; e.g., Sudoku, map coloring.
13. What is the minimax algorithm in adversarial search?
A decision rule to minimize the possible loss for a worst-case scenario in games.
14. How does alpha-beta pruning optimize minimax?
It cuts off branches that won’t affect the final decision, reducing computations.
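A compact sketch of minimax with alpha-beta cut-offs, assuming the game tree is given directly as nested lists with leaf utilities (a simplification for illustration):

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning on a nested-list game tree (leaves are scores)."""
    if isinstance(node, (int, float)):      # leaf: return its utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # cut-off: MIN will never allow this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                   # cut-off: MAX already has a better option
            break
    return value

# Classic textbook tree: the optimal value for the MAX player is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))     # 3
```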
15. Define Machine Learning and its types.
ML enables systems to learn from data. Types: supervised, unsupervised, and
reinforcement learning.
16. What is the significance of feature engineering in ML?
Improves model performance by selecting and transforming input variables.
17. What are the challenges of imbalanced data?
Models become biased toward the majority class, reducing accuracy on the minority class.
18. Explain backpropagation in artificial neural networks.
Algorithm to update weights by propagating error gradients backward through the
network.
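A toy NumPy illustration, assuming a tiny 2-8-1 sigmoid network trained on XOR with plain batch gradient descent; the architecture and learning rate are arbitrary example choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backward pass: output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)   # hidden-layer error signal via the chain rule
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0]
```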
19. What is regularization and why is it needed?
Technique to prevent overfitting by adding penalty terms to the loss function.
20. Differentiate between LASSO and Ridge regression.
LASSO uses L1 regularization and can shrink some coefficients to zero; Ridge uses
L2 and shrinks coefficients continuously.
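A brief scikit-learn comparison on synthetic data (the data-generating coefficients and penalty strengths are arbitrary example values):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Synthetic data: only the first 3 of 10 features actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty

print("LASSO zeroed coefficients:", np.sum(lasso.coef_ == 0))  # typically the 7 irrelevant ones
print("Ridge zeroed coefficients:", np.sum(ridge.coef_ == 0))  # 0: ridge shrinks but never zeroes
```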
21. Compare KNN, Naïve Bayes, and SVM classifiers.
KNN is instance-based, Naïve Bayes is probabilistic, and SVM finds optimal
separating hyperplane.
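A quick scikit-learn comparison of the three classifiers on the Iris dataset, using default hyperparameters purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
for name, clf in [("KNN", KNeighborsClassifier()),    # distance to stored instances
                  ("Naive Bayes", GaussianNB()),      # class-conditional probabilities
                  ("SVM", SVC())]:                    # maximum-margin hyperplane (RBF kernel)
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```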
22. What is cross-validation and its importance?
Method to assess model generalization by partitioning data into training and testing
sets multiple times.
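A short scikit-learn example of 5-fold cross-validation; the pipeline and dataset are just convenient illustrations:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression())
# 5-fold CV: train on 4 folds, evaluate on the held-out fold, rotate, and average.
scores = cross_val_score(model, X, y, cv=5)
print(scores.round(3), "mean:", scores.mean().round(3))
```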
23. Define and explain Precision, Recall, and F1-score.
Precision: True positives over predicted positives. Recall: True positives over actual
positives. F1-score: harmonic mean of precision and recall.
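A small worked example using scikit-learn's metric functions (the labels below are made up so the counts are easy to check by hand):

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]   # TP=3, FP=1, FN=1

print(precision_score(y_true, y_pred))  # 3 / (3 + 1) = 0.75
print(recall_score(y_true, y_pred))     # 3 / (3 + 1) = 0.75
print(f1_score(y_true, y_pred))         # 2 * 0.75 * 0.75 / (0.75 + 0.75) = 0.75
```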
24. Explain Boosting and Bagging with examples.
Boosting combines weak learners sequentially to improve performance (e.g.,
AdaBoost); Bagging builds multiple models independently and averages results (e.g.,
Random Forest).
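A minimal scikit-learn sketch contrasting the two ensembles on a synthetic classification task; the estimator counts and data parameters are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

boosting = AdaBoostClassifier(n_estimators=100, random_state=0)    # sequential weak learners
bagging = RandomForestClassifier(n_estimators=100, random_state=0) # independent trees, averaged

print("AdaBoost:     ", cross_val_score(boosting, X, y, cv=5).mean().round(3))
print("Random Forest:", cross_val_score(bagging, X, y, cv=5).mean().round(3))
```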
25. What is a Hidden Markov Model (HMM)?
A statistical model of systems with hidden states and observable outputs, used for sequence data such as speech or text.
26. Explain linear vs multivariate linear regression.
Simple linear regression predicts the output from a single input feature; multivariate (multiple) linear regression combines several features to predict the output.
27. What is logistic regression and how is it regularized?
Classification algorithm predicting probabilities with logistic function; regularized
with L1 or L2 to avoid overfitting.
28. Explain decision tree pruning with examples.
Removing branches to reduce overfitting; e.g., pre-pruning stops growth early, post-
pruning removes branches after full growth.
29. What are the strengths and weaknesses of SVM?
Strengths: effective in high dimensions, robust. Weaknesses: computationally
expensive, less effective on noisy data.
30. How does boosting improve weak learners?
Sequentially trains weak learners focusing on previously misclassified data to
improve accuracy.
31. Outline steps in developing a supervised ML algorithm.
Data collection → preprocessing → feature engineering → model selection →
training → evaluation → deployment.
32. What is the role of feature selection in performance?
Reduces dimensionality, improves accuracy, and speeds up training.
33. How does K-means clustering algorithm work?
Partitions data into k clusters by minimizing within-cluster variance.
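A short scikit-learn illustration on two synthetic blobs (cluster locations and sizes are arbitrary example values):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs; k-means alternates assignment and centroid-update steps.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, size=(50, 2)), rng.normal(5, 0.5, size=(50, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # close to (0, 0) and (5, 5)
print(km.inertia_)           # within-cluster sum of squares being minimized
```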
34. What are the different hierarchical clustering techniques?
Agglomerative (bottom-up) and divisive (top-down) clustering.
35. Explain Principal Component Analysis (PCA).
Dimensionality reduction technique projecting data onto principal components
capturing maximum variance.
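A brief scikit-learn example projecting the 4-dimensional Iris data onto its first two principal components:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)
X2 = pca.fit_transform(X)             # project 4-D data onto 2 principal components
print(X2.shape)                       # (150, 2)
print(pca.explained_variance_ratio_)  # fraction of variance captured by each component
```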
36. How do you evaluate clustering algorithms?
Using metrics like silhouette score, Davies-Bouldin index, and within-cluster sum of
squares.
37. Compare different boosting methods.
AdaBoost focuses on misclassified instances; Gradient Boosting fits residual errors;
XGBoost adds regularization for better performance.
38. Describe practical ML applications in real-world systems.
Examples include fraud detection, image recognition, recommendation systems, and
medical diagnosis.
39. What factors influence algorithm choice in ML?
Data size, feature types, model interpretability, training time, and application
requirements.
40. What are key perspectives and challenges in Deep Learning?
Challenges include data requirements, training complexity, interpretability, and
overfitting.
41. Describe a feedforward neural network.
Network with layers where data moves in one direction from input to output.
42. What is an activation function? Compare sigmoid and ReLU.
Activation functions introduce non-linearity; sigmoid squashes inputs into (0, 1), while ReLU outputs zero for negative inputs and the input itself for positive inputs.
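A tiny NumPy sketch of the two functions for comparison (the sample inputs are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes any input into (0, 1)

def relu(z):
    return np.maximum(0.0, z)         # 0 for negative inputs, identity for positive

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))  # approximately [0.119 0.5 0.881]
print(relu(z))     # [0. 0. 2.]
```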
43. Explain the backpropagation algorithm.
Computes gradient of loss function w.r.t. weights to update them via gradient descent.
44. What is the role of regularization in training?
Prevents overfitting by penalizing complex models.
45. Discuss optimization algorithms used in DL.
Includes SGD, Adam, RMSProp, which adjust learning rates and momentum for
efficient training.
46. Explain dropout and its purpose in deep models.
Randomly drops neurons during training to reduce overfitting.
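A minimal NumPy sketch of inverted dropout, the common variant that rescales surviving activations during training; the drop probability and array shape are arbitrary:

```python
import numpy as np

def dropout(activations, p=0.5, training=True, seed=0):
    """Inverted dropout: zero each unit with probability p during training and
    rescale survivors by 1/(1-p), so nothing changes at inference time."""
    if not training:
        return activations
    mask = np.random.default_rng(seed).random(activations.shape) >= p
    return activations * mask / (1.0 - p)

h = np.ones((2, 8))
print(dropout(h, p=0.5))   # roughly half the units are zero, the rest are scaled to 2.0
```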
47. What is a Recurrent Neural Network (RNN)?
Neural network designed for sequential data, with feedback connections to remember
previous inputs.
