Lab Sheet
Artificial Intelligence
1. Introduction to Machine Learning: Linear Regression
      Objective: Understand the basics of linear regression and implement a model to predict values.
      Code:
      import numpy as np
      import matplotlib.pyplot as plt
      from sklearn.linear_model import LinearRegression
   
      # Generate sample data
      X = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)
      y = np.array([1, 2, 3, 4, 5])
   
      # Create and train the model
      model = LinearRegression()
      model.fit(X, y)
   
      # Predict values
      y_pred = model.predict(X)
   
      # Plot
      plt.scatter(X, y, color='blue')
      plt.plot(X, y_pred, color='red')
      plt.show()
      Task: Modify the code to work with a different dataset, such as predicting housing prices (a starting sketch follows).
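      As a starting point, here is a minimal sketch using scikit-learn's built-in California Housing
      dataset; the choice of dataset and of median income as the single feature are assumptions,
      not part of the original lab:

      from sklearn.datasets import fetch_california_housing
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import train_test_split

      # Load the California Housing dataset (assumed here as the example dataset)
      housing = fetch_california_housing()
      X = housing.data[:, [0]]  # median income, a single illustrative feature
      y = housing.target        # median house value

      # Hold out a test set for honest evaluation
      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

      model = LinearRegression()
      model.fit(X_train, y_train)
      print(f"R^2 on test data: {model.score(X_test, y_test):.3f}")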
2. Classification: K-Nearest Neighbors (KNN)
      Objective: Implement KNN for classifying data points.
      Code:
      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.metrics import accuracy_score
   
      # Load dataset
      iris = load_iris()
      # Fix the random seed so results are reproducible across runs
      X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
   
      # KNN Classifier
      knn = KNeighborsClassifier(n_neighbors=3)
      knn.fit(X_train, y_train)
   
      # Predictions
      y_pred = knn.predict(X_test)
   
      # Accuracy
      print(f"Accuracy: {accuracy_score(y_test, y_pred)}")
      Task: Experiment with different values of k and evaluate the model’s performance (a sketch follows).
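      A minimal sketch for the task, reusing the split and imports from the code above; the
      particular k values are arbitrary:

      # Compare accuracy across several values of k
      for k in [1, 3, 5, 7, 9]:
          knn = KNeighborsClassifier(n_neighbors=k)
          knn.fit(X_train, y_train)
          acc = accuracy_score(y_test, knn.predict(X_test))
          print(f"k={k}: accuracy={acc:.3f}")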
3. Clustering: K-Means
      Objective: Learn clustering using K-Means.
      Code:
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs
      import matplotlib.pyplot as plt
   
      # Create synthetic data
      X, y = make_blobs(n_samples=300, centers=4, random_state=42)
   
      # Fit K-Means
      kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)  # explicit n_init for consistent results across scikit-learn versions
      kmeans.fit(X)
   
      # Plot results
      plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_, cmap='viridis')
      plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], c='red', marker='X', s=200)
      plt.show()
      Task: Modify the code to work with a different number of clusters and evaluate the results (a sketch follows).
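      A minimal sketch for evaluating different cluster counts, assuming the silhouette score
      as the quality metric (inertia or other metrics would also work); it reuses X and KMeans
      from above:

      from sklearn.metrics import silhouette_score

      # Higher silhouette scores indicate better-separated clusters
      for n in range(2, 8):
          km = KMeans(n_clusters=n, n_init=10, random_state=42).fit(X)
          print(f"n_clusters={n}: silhouette={silhouette_score(X, km.labels_):.3f}")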
4. Natural Language Processing (NLP): Text Classification with Naive Bayes
      Objective: Classify text into categories using Naive Bayes.
      Code:
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score
   
      # Sample dataset
      texts = ["I love programming", "Python is amazing", "I hate bugs", "Debugging is fun"]
      labels = [1, 1, 0, 0]  # example class labels for each text
   
     # Vectorize text
     vectorizer = CountVectorizer()
     X = vectorizer.fit_transform(texts)
  
     # Split data
      # With only four samples, test_size=0.25 leaves a single test example
      X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=42)
  
     # Train Naive Bayes
     model = MultinomialNB()
     model.fit(X_train, y_train)
  
     # Predict and evaluate
     y_pred = model.predict(X_test)
     print(f"Accuracy: {accuracy_score(y_test, y_pred)}")
      Task: Train on a more extensive dataset and experiment with different text classification algorithms (a sketch follows).
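      A minimal sketch for the task, assuming scikit-learn's 20 Newsgroups dataset and
      logistic regression as the alternative algorithm; the two categories are arbitrary
      choices to keep the example fast:

      from sklearn.datasets import fetch_20newsgroups
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score

      # Load a two-category subset of 20 Newsgroups
      cats = ['sci.space', 'rec.autos']
      train = fetch_20newsgroups(subset='train', categories=cats)
      test = fetch_20newsgroups(subset='test', categories=cats)

      # TF-IDF features instead of raw counts
      vec = TfidfVectorizer(stop_words='english')
      X_train = vec.fit_transform(train.data)
      X_test = vec.transform(test.data)

      clf = LogisticRegression(max_iter=1000)
      clf.fit(X_train, train.target)
      print(f"Accuracy: {accuracy_score(test.target, clf.predict(X_test)):.3f}")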
5. Deep Learning: Neural Networks with TensorFlow (MNIST Dataset)
     Objective: Implement a neural network to classify handwritten digits.
     Code:
     import tensorflow as tf
     from tensorflow.keras import layers, models
     from tensorflow.keras.datasets import mnist
  
     # Load dataset
     (X_train, y_train), (X_test, y_test) = mnist.load_data()
  
     # Preprocess data
      X_train = X_train.reshape((X_train.shape[0], 28, 28, 1)).astype('float32') / 255
      X_test = X_test.reshape((X_test.shape[0], 28, 28, 1)).astype('float32') / 255
  
     # Build model
      model = models.Sequential([
          layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
          layers.MaxPooling2D((2, 2)),
          layers.Flatten(),
          layers.Dense(128, activation='relu'),
          layers.Dense(10, activation='softmax')
      ])
  
      model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
  
     # Train model
     model.fit(X_train, y_train, epochs=5)
  
     # Evaluate model
     test_loss, test_acc = model.evaluate(X_test, y_test)
     print(f"Test accuracy: {test_acc}")
      Task: Improve the model's accuracy by adding more layers and experimenting with different hyperparameters (one possible deeper architecture is sketched below).
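      One possible deeper architecture, sketched under the assumption that the preprocessing
      above has already run; the extra convolution block, dropout rate, and batch size are
      illustrative, not tuned values:

      model = models.Sequential([
          layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
          layers.MaxPooling2D((2, 2)),
          layers.Conv2D(64, (3, 3), activation='relu'),  # second convolution block
          layers.MaxPooling2D((2, 2)),
          layers.Flatten(),
          layers.Dense(128, activation='relu'),
          layers.Dropout(0.5),                           # regularization against overfitting
          layers.Dense(10, activation='softmax')
      ])
      model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
      model.fit(X_train, y_train, epochs=5, batch_size=128, validation_split=0.1)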
6. Reinforcement Learning: Q-Learning
     Objective: Implement Q-learning for a simple environment (like Gridworld).
     Code:
     import numpy as np
     import random
  
     # Define environment
     grid_size = 5
     actions = ['U', 'D', 'L', 'R']
     rewards = np.zeros((grid_size, grid_size))
     rewards[4, 4] = 10 # Goal state
  
     # Q-table
     q_table = np.zeros((grid_size, grid_size, len(actions)))
  
     # Learning parameters
     learning_rate = 0.1
     discount_factor = 0.9
     epsilon = 0.1
     episodes = 1000
  
     # Q-learning algorithm
      for _ in range(episodes):
          state = (0, 0)  # Start at the top-left corner
          while state != (4, 4):
              # Epsilon-greedy action selection
              if random.uniform(0, 1) < epsilon:
                  action = random.choice(actions)  # Explore
              else:
                  action = actions[np.argmax(q_table[state[0], state[1]])]  # Exploit

              # Take the action and compute the next state (clamped to the grid)
              if action == 'U':
                  next_state = (max(0, state[0] - 1), state[1])
              elif action == 'D':
                  next_state = (min(grid_size - 1, state[0] + 1), state[1])
              elif action == 'L':
                  next_state = (state[0], max(0, state[1] - 1))
              else:  # 'R'
                  next_state = (state[0], min(grid_size - 1, state[1] + 1))

              # Q-learning update:
              # Q(s, a) <- (1 - lr) * Q(s, a) + lr * (reward + gamma * max_a' Q(s', a'))
              reward = rewards[next_state]
              a = actions.index(action)
              q_table[state[0], state[1], a] = (
                  (1 - learning_rate) * q_table[state[0], state[1], a]
                  + learning_rate * (reward + discount_factor
                                     * np.max(q_table[next_state[0], next_state[1]]))
              )

              # Move to the next state
              state = next_state
      Task: Expand the environment and train the agent in a more complex gridworld (a starting sketch follows).
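      A minimal sketch of one way to extend the environment, reusing the definitions above;
      the penalty cells and their values are illustrative assumptions:

      # Penalize a few "trap" cells so the agent must learn to route around them
      rewards[2, 2] = -5
      rewards[1, 3] = -5
      rewards[3, 1] = -5

      # Re-run the training loop above with the modified reward grid, then inspect the
      # greedy policy per cell
      policy = np.array([[actions[np.argmax(q_table[i, j])] for j in range(grid_size)]
                         for i in range(grid_size)])
      print(policy)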
7. Transfer Learning with Pre-trained Models
     Objective: Use pre-trained models for image classification.
     Code:
     from tensorflow.keras.applications import VGG16
     from tensorflow.keras import layers, models
     from tensorflow.keras.preprocessing import image
     from tensorflow.keras.applications.vgg16 import preprocess_input
     import numpy as np
  
     # Load pre-trained VGG16 model
      base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
  
     # Freeze the base model
     base_model.trainable = False
  
     # Add custom layers
     model = models.Sequential([
         base_model,
         layers.GlobalAveragePooling2D(),
         layers.Dense(1, activation='sigmoid')
     ])
  
      model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
  
     # Load and preprocess an image
     img = image.load_img('path_to_image.jpg', target_size=(224, 224))
     img_array = image.img_to_array(img)
     img_array = np.expand_dims(img_array, axis=0)
     img_array = preprocess_input(img_array)
  
      # Predict (note: the new Dense head is still untrained, so this output is not
      # meaningful until the model is fine-tuned on labeled data)
      prediction = model.predict(img_array)
      print(prediction)
      Task: Fine-tune the model for a custom classification task using your own dataset (see the sketch below).
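      A minimal fine-tuning sketch, assuming your images are arranged one class per subfolder;
      the path 'data/train', the epoch counts, and the learning rate are placeholders, and
      base_model, model, and preprocess_input come from the code above:

      import tensorflow as tf

      # Build a dataset from a directory (hypothetical path) and apply VGG16 preprocessing
      train_ds = tf.keras.utils.image_dataset_from_directory(
          'data/train', image_size=(224, 224), batch_size=32, label_mode='binary')
      train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y))

      # Train the new head first, with the VGG16 base frozen
      model.fit(train_ds, epochs=3)

      # Then unfreeze the base and continue at a low learning rate
      base_model.trainable = True
      model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                    loss='binary_crossentropy', metrics=['accuracy'])
      model.fit(train_ds, epochs=2)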
8. Object Detection with OpenCV
       Objective: Use OpenCV to detect objects in images, with a Haar cascade face detector as the example.
      Code:
      import cv2
   
      # Load pre-trained object detector
      detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
   
      # Load image
      img = cv2.imread('path_to_image.jpg')
   
      # Convert to grayscale
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
   
      # Detect faces
      faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
   
      # Draw rectangles around faces
      for (x, y, w, h) in faces:
          cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
   
      # Show the image
      cv2.imshow('Faces', img)
      cv2.waitKey(0)
      cv2.destroyAllWindows()
      Task: Experiment with other object detection models, such as YOLO (a sketch follows).
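      A minimal sketch assuming the third-party ultralytics package (pip install ultralytics);
      the model name 'yolov8n.pt' refers to a small pre-trained YOLOv8 checkpoint that the
      package downloads on first use:

      from ultralytics import YOLO

      # Load a small pre-trained YOLOv8 model
      model = YOLO('yolov8n.pt')

      # Run detection on the same image as above
      results = model('path_to_image.jpg')

      # Print class id, confidence, and bounding box for each detection
      for box in results[0].boxes:
          print(box.cls, box.conf, box.xyxy)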
9. Sentiment Analysis
      Objective: Perform sentiment analysis on movie reviews.
      Code:
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score
   
      # Sample data (movie reviews and labels)
      reviews = ["I loved the movie", "I hated the movie", "It was okay", "Fantastic movie", "Boring plot"]
      labels = [1, 0, 1, 1, 0]  # 1: Positive, 0: Negative
   
      # Vectorize text
      vectorizer = CountVectorizer()
     X = vectorizer.fit_transform(reviews)
  
     # Split data
      # With five samples this leaves only two test examples; a real dataset is needed for
      # meaningful evaluation
      X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=42)
  
     # Train Naive Bayes
     model = MultinomialNB()
     model.fit(X_train, y_train)
  
     # Predict and evaluate
     y_pred = model.predict(X_test)
     print(f"Accuracy: {accuracy_score(y_test, y_pred)}")
      Task: Use a larger dataset for sentiment analysis, e.g., IMDb reviews (a sketch follows).
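      A minimal sketch assuming the Hugging Face datasets package (pip install datasets) as
      the source of the IMDb reviews; it reuses the sklearn imports above, and the vectorizer
      settings are illustrative:

      from datasets import load_dataset

      # Load the IMDb reviews dataset (25k train / 25k test)
      imdb = load_dataset('imdb')

      vec = CountVectorizer(max_features=20000, stop_words='english')
      X_train = vec.fit_transform(imdb['train']['text'])
      X_test = vec.transform(imdb['test']['text'])

      clf = MultinomialNB()
      clf.fit(X_train, imdb['train']['label'])
      print(f"Accuracy: {accuracy_score(imdb['test']['label'], clf.predict(X_test)):.3f}")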
10. Time Series Forecasting with ARIMA
     Objective: Use ARIMA for forecasting time series data.
     Code:
     import numpy as np
     import pandas as pd
     from statsmodels.tsa.arima.model import ARIMA
     import matplotlib.pyplot as plt
  
     # Generate synthetic time series data
     np.random.seed(42)
     data = np.random.randn(100) + 10
     time = pd.date_range(start='2025-01-01', periods=100, freq='D')
  
     # Create DataFrame
     df = pd.DataFrame(data, index=time, columns=['Value'])
  
     # Fit ARIMA model
     model = ARIMA(df['Value'], order=(5,1,0))
     model_fit = model.fit()
  
     # Forecast future values
     forecast = model_fit.forecast(steps=10)
  
     # Plot results
     plt.plot(df.index, df['Value'], label='Historical Data')
      plt.plot(pd.date_range(start=df.index[-1], periods=11, freq='D')[1:], forecast, label='Forecast', color='red')
     plt.legend()
     plt.show()
      Task: Forecast using a real-world dataset, such as stock prices or weather data (see the sketch below).
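      A minimal sketch assuming a CSV file of daily closing prices; the filename
      'stock_prices.csv' and its Date/Close columns are placeholders for whatever real
      dataset you choose, and the imports above are reused:

      # Load a real-world series, e.g. daily closing prices
      df = pd.read_csv('stock_prices.csv', parse_dates=['Date'], index_col='Date')

      # Fit on all but the last 10 observations, then forecast them
      train, test = df['Close'].iloc[:-10], df['Close'].iloc[-10:]
      model_fit = ARIMA(train, order=(5, 1, 0)).fit()
      forecast = model_fit.forecast(steps=10)

      # Compare the forecast against the held-out values
      plt.plot(train.index, train, label='Train')
      plt.plot(test.index, test, label='Actual')
      plt.plot(test.index, forecast, label='Forecast', color='red')
      plt.legend()
      plt.show()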
These lab sheets provide hands-on experience in different areas of AI. Feel free to modify the
code or extend each task as you go!