
AWS CLOUD INTERNSHIP

A Summer Internship Report submitted in partial fulfillment of the
requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
in
ELECTRONICS AND COMMUNICATION ENGINEERING

Submitted by
M. NITHYASRI
22A91A0441

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
ADITYA UNIVERSITY
(Formerly Aditya Engineering College (A))
2024-2025
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

CERTIFICATE

This is to certify that the internship report entitled “AWS CLOUD” is being submitted by
M. NITHYASRI (22A91A0441) in partial fulfillment of the requirements for the award of the B.Tech. degree
in Electronics and Communication Engineering for the academic year 2024-2025.

Internship Coordinator: Ch. V. Kiranmayi, M.Tech, Assistant Professor, Department of ECE
Head of the Department: Dr. Sanjeev Kumar, Associate Professor, Department of ECE
DECLARATION

I hereby declare that the internship report entitled “AWS CLOUD” is a genuine report. This work has
been submitted to ADITYA UNIVERSITY, Surampalem, in partial fulfillment of the requirements for the B.Tech. degree.
I further declare that this report has not been submitted, in full or in part, for the award of any degree of this or
any other educational institution.

Submitted by
M. NITHYASRI
(22A91A0441)
INTERNSHIP COMPLETION CERTIFICATE
ACKNOWLEDGEMENT

First, I would like to thank the Director of Technical Hub, Surampalem, for giving me the
opportunity to do an internship within the organization. I would also like to thank all the people who worked
along with me at Technical Hub, Surampalem; with their patience and openness they created an enjoyable
working environment.

It is with immense pleasure that I express my indebted gratitude to my internship
coordinator, Ch. V. Kiranmayi, Assistant Professor, who guided and encouraged me at every
step of the intern project work; her valuable moral support and guidance throughout the intern project
helped me to a great extent.

I am grateful to Dr. Sanjeev Kumar, Associate Professor and HOD, for inspiring us all the way and for
arranging all the facilities and resources needed for my intern project work.

I wish to thank Dr. M.V. Rajesh, Associate Dean, and Dr. Dola Sanjay, Dean, School of Engineering,
for their encouragement and support during the course of my intern project work.

I would like to extend my sincere thanks to Dr. G. Suresh, Registrar, Dr. S. Rama Sree, Pro Vice-
Chancellor, Dr. M.B. Srinivas, Vice-Chancellor, Dr. M. Sreenivasa Reddy, Deputy Pro Chancellor,
and the Management of Aditya University for their unconditional support and for providing the best infrastructural
facilities and state-of-the-art laboratories during my intern project work.

Not to forget, the Faculty, Lab Technicians, Non-Teaching Staff, and my Friends who have directly or indirectly
helped and supported me in completing my intern project work in time.
ABSTRACT

This report summarizes my internship at Amazon Web Services (AWS), where I gained hands-on
experience in cloud computing by working with key AWS services such as EC2, S3, Lambda,
RDS, CloudFormation, and IAM. During the internship, I assisted with configuring and optimizing
cloud resources, automating infrastructure deployments, monitoring system performance, and
implementing security best practices. I also contributed to troubleshooting scalability and
availability issues while working with tools like AWS CLI, Terraform, and CloudWatch to improve
operational efficiency.

Through this experience, I developed a deeper understanding of cloud architecture, resource
optimization, and cost management. I collaborated with cross-functional teams to design and
maintain AWS-based solutions, enhancing my problem-solving skills in a fast-paced environment.
The internship strengthened my technical expertise and prepared me for a career in cloud
computing, DevOps, and cloud architecture, emphasizing the importance of security, scalability,
and efficient resource management in cloud-based solutions.
Learning Objectives/Internship Objectives

 Internships are generally thought of as being reserved for college students looking to
   gain experience in a particular field. However, a wide array of people can benefit
   from training internships in order to receive real-world experience and develop
   their skills.

 An objective for this position should emphasize the skills you already possess in
   the area and your interest in learning more.

 Internships are utilized in a number of different career fields, including
   architecture, engineering, healthcare, economics, advertising, and many more.

 Some internships are used to allow individuals to perform scientific research, while
   others are specifically designed to allow people to gain first-hand experience of
   working.

 Utilizing internships is a great way to build your resume and develop skills that
   can be emphasized in your resume for future jobs. When you are applying for a
   training internship, make sure to highlight any special skills or talents that can
   make you stand apart from the rest of the applicants so that you have an improved
   chance of landing the position.
WEEKLY OVERVIEW OF INTERNSHIP ACTIVITIES

1st WEEK
DATE         DAY        NAME OF THE TOPIC/MODULE COMPLETED
03-06-2024   Monday     On-boarding and introduction to company
04-06-2024   Tuesday    Holiday
05-06-2024   Wednesday  Introduction to GitHub and Version Control System
06-06-2024   Thursday   Introduction to Operating Systems
07-06-2024   Friday     Working with Different Operating Systems
08-06-2024   Saturday   Introduction to Client-Server Architecture

2nd WEEK
DATE         DAY        NAME OF THE TOPIC/MODULE COMPLETED
10-06-2024   Monday     Different types of Servers
11-06-2024   Tuesday    Introduction to Networking
12-06-2024   Wednesday  Datacenters and Servers
13-06-2024   Thursday   Activity on infrastructure connectivity
14-06-2024   Friday     Introduction to Cloud Infrastructure
15-06-2024   Saturday   Cloud Computing Models

3rd WEEK
DATE         DAY        NAME OF THE TOPIC/MODULE COMPLETED
17-06-2024   Monday     Holiday
18-06-2024   Tuesday    Cloud Services
19-06-2024   Wednesday  Introduction to Virtualization
20-06-2024   Thursday   Virtual Servers on Linux
21-06-2024   Friday     Activity on Cloud and Virtualization
22-06-2024   Saturday   Introduction to Linux OS

4th WEEK
DATE         DAY        NAME OF THE TOPIC/MODULE COMPLETED
24-06-2024   Monday     Linux command syntax and basic commands
25-06-2024   Tuesday    Linux Users and Groups
26-06-2024   Wednesday  Basic file and directory permissions in Linux
27-06-2024   Thursday   Activity on Linux Operating System
28-06-2024   Friday     Introduction to AWS Services and Service Categories
29-06-2024   Saturday   Understanding the AWS Management Console

5th WEEK
DATE         DAY        NAME OF THE TOPIC/MODULE COMPLETED
01-07-2024   Monday     AWS Regions and Availability Zones
02-07-2024   Tuesday    Introduction to AWS Compute Services
03-07-2024   Wednesday  Working with the EC2 Service
04-07-2024   Thursday   Activity on AWS Management Console and EC2 Service
05-07-2024   Friday     Web application deployment on Windows Server using EC2
06-07-2024   Saturday   Web application deployment on Linux Server using EC2

6th WEEK
DATE         DAY        NAME OF THE TOPIC/MODULE COMPLETED
08-07-2024   Monday     Managing options for EC2 instances
09-07-2024   Tuesday    Ways of connecting to Linux EC2 instances using SSH. Sharing data between local and cloud EC2 instances
10-07-2024   Wednesday  Activity on web application deployment using the EC2 compute service
11-07-2024   Thursday   Introduction to Storage technologies
12-07-2024   Friday     Block vs Object Storage services
13-07-2024   Saturday   Working with AWS S3

7th WEEK
DATE         DAY        NAME OF THE TOPIC/MODULE COMPLETED
15-07-2024   Monday     Volumes and Snapshots using AWS Elastic Block Store
16-07-2024   Tuesday    Working with EBS
17-07-2024   Wednesday  Holiday
18-07-2024   Thursday   Introduction to AWS Virtual Private Cloud and its components
19-07-2024   Friday     IPv4 Addressing and Subnetting
20-07-2024   Saturday   VPC Peering

8th WEEK
DATE         DAY        NAME OF THE TOPIC/MODULE COMPLETED
22-07-2024   Monday     Setting up an Elastic Load Balancer for EC2 Instances
23-07-2024   Tuesday    Creation of Auto Scaling Instances
24-07-2024   Wednesday  AI & ML Introduction
25-07-2024   Thursday   AWS AI Services Overview & APIs
26-07-2024   Friday     Project Deployment
27-07-2024   Saturday   Project Deployment
INDEX

S.No   Contents                                                        Page
       Introduction to AI                                              1
1.     Module-1                                                        2-4
       1.1 Python Fundamentals with advanced concepts and
           mathematical foundations
2.     Module-2
       2.1 Data Understanding & Big Data for AI                        5
       2.2 Importance of Data Understanding in AI                      5
       2.3 Big Data: The fuel for AI Innovations                       6
       2.4 Big Data is characterized by the 3 Vs                       6
       2.5 Five layers of a sequencer                                  7
3.     Module-3                                                        8
       3.1 AI Vision, Classification & Neural Networks                 9
4.     Module-4
       4.1 Reinforcement Learning & AI Problem Solving                 11
       4.2 Applications of Reinforcement Learning in AI
           Problem Solving                                             12-13
5.     Module-5
       5.1 Features of Python                                          14
       5.2 Environment                                                 15
       5.3 Python Applications                                         15
       5.4 Data types in Python                                        15-16
       5.5 Operations in Python                                        17-19
       5.6 Advanced Datatypes                                          20-23
       5.7 List Implementation                                         24-26
       5.8 Tuple                                                       26-27
       5.9 Sets                                                        27-29
6.     Annexure (Traffic Sign Recognition project)                     30-43
7.     Conclusion                                                      44
8.     Executive Summary                                               45
9.     About the Company                                               46
10.    Opportunities                                                   47
11.    Training                                                        48-49
12.    Challenges Faced                                                50

List of Figures

S.No   Fig number   Fig name                                           Page No.
1      1.1          plt.imshow(X_train[0])                             33
2      1.2          plt.imshow(X_test[0])                              33
3      1.3          Speed limit (80 km/h)                              34
4      1.4          Speed limit (70 km/h)                              34
5      1.5          No passing for vehicles over 3.5 metric tons       34
6      1.6          Road Work                                          34
7      1.7          Turn left Ahead                                    35
8      1.8          Keep Right                                         35
9      1.9          Flaggers in road ahead warning                     35
10     2.0          Traffic Signal Ahead                               35
11     2.1          plt.imshow(X_valid[0])                             36
12     2.2          Train Labels                                       37
13     2.3          Valid Labels                                       37
14     2.4          Test Labels                                        37
15     2.5          Original                                           38
16     2.6          Scaled                                             38
17     2.7          Translation                                        38
18     2.8          Rotation                                           38
19     2.9          Labels of the Train and Augmentation               39
20     3.0          Epoch Value                                        43

INTRODUCTION

Amazon Web Services (AWS) is a comprehensive and widely adopted cloud platform, offering a vast
range of services that enable organizations to move, manage, and scale their workloads in the cloud. As one of the
earliest pioneers of cloud computing, AWS has grown to become the most extensive and widely used cloud
infrastructure in the world, powering millions of customers across various sectors, including startups, enterprises,
government agencies, and non-profit organizations. The platform delivers on-demand computing resources—such
as storage, databases, networking, and machine learning—over the internet, eliminating the need for businesses to
invest in and manage physical hardware and data centers.

One of the key advantages of AWS is its scalability and flexibility. Organizations can quickly scale their
infrastructure up or down based on demand, reducing costs and improving efficiency. AWS operates on a pay-as-
you-go pricing model, where users only pay for the services they consume, which helps organizations avoid the
high upfront costs of traditional IT infrastructure. Among its many services, AWS offers compute power through
Amazon EC2 (Elastic Compute Cloud), object storage through Amazon S3 (Simple Storage Service), data
management via Amazon RDS (Relational Database Service), and serverless computing with AWS Lambda. These
services, along with tools for networking, analytics, security, and application development, provide a
comprehensive environment for building, deploying, and managing cloud-based applications.

In addition to its technical offerings, AWS provides robust security features, such as encryption, identity
and access management (IAM), and compliance with a wide range of global standards and regulations. AWS
ensures that data and applications are protected with high availability, fault tolerance, and disaster recovery
capabilities, making it a reliable choice for businesses that require secure, mission-critical applications. The
platform also supports DevOps practices by enabling automation and continuous integration and delivery (CI/CD),
making it easier to build and deploy applications in a consistent, repeatable manner.

Given the rapid growth of cloud adoption across industries, acquiring hands-on experience with AWS is
an essential skill for anyone pursuing a career in cloud computing, software engineering, DevOps, or cloud
architecture. This report details my internship experience at AWS, where I gained practical experience with key
cloud services and developed a deeper understanding of cloud infrastructure management. Throughout my
internship, I worked on various projects that involved deploying, optimizing, and securing AWS resources. This
report will provide insights into the specific AWS services I worked with, the challenges I faced, the technical skills
I developed, and the overall impact this experience had on my understanding of cloud computing and AWS's role in
modern IT environments.

In summary, AWS is a critical enabler of cloud innovation, helping businesses scale more efficiently,
reduce operational costs, and enhance their agility in today's fast-paced digital world. Through my internship, I was
able to gain firsthand knowledge of how AWS supports organizations in driving digital transformation, and I have
developed the skills and expertise necessary to navigate the cloud ecosystem with confidence. The hands-on
experience with AWS has provided me with a solid foundation in cloud computing, equipping me with the tools and
insights to contribute effectively to cloud-based projects in the future.

MODULE -1

1.1 AWS Cloud Fundamentals :

Amazon Web Services (AWS) is a comprehensive and widely adopted cloud platform that provides a broad
range of infrastructure services, including computing power, storage solutions, and networking capabilities. During
my internship, the primary objective was to grasp the fundamental services and tools offered by AWS, understand
their role in building scalable and resilient cloud architectures, and explore how businesses leverage these services
to address their technological requirements.

This report outlines the AWS fundamentals learned during the internship, emphasizing key services,
practical applications, and hands-on experiences. AWS empowers businesses to operate applications and store data
without the need to maintain on-premise hardware, offering cloud-based solutions that are scalable and reliable.

A key advantage of AWS is its pay-as-you-go pricing model, which ensures users only pay for the resources
they consume. This model enhances scalability and flexibility, making it ideal for businesses with varying
workloads. The AWS global infrastructure is structured around geographic regions and availability zones (AZs),
which consist of multiple data centers. This design ensures high availability, fault tolerance, and low latency for
applications.

1.2 Overview Of AWS Cloud Operations :

AWS enables organizations to transition from traditional, on-premise hardware to fully cloud-based
environments. This shift provides the following key benefits:

1. Flexibility: Businesses can dynamically adjust resource allocation based on current workloads, ensuring that
they only use what they need. For instance, applications can scale up during peak usage periods (e.g., sales
events) and scale down during quieter times, reducing unnecessary expenditure.
2. Reduced Capital Expenditure: By leveraging AWS’s infrastructure, businesses eliminate the need for
costly upfront investments in physical servers, storage, and networking hardware. This is particularly
beneficial for startups and small businesses, which can invest their resources in innovation rather than
infrastructure.
3. Remote Access: Cloud-based operations enable teams to access resources and applications from anywhere,
ensuring seamless operations even for distributed or remote workforces.
4. Accelerated Deployment: AWS's pre-configured services and tools allow businesses to quickly deploy
applications without worrying about hardware setup or compatibility issues.
5. Eco-Friendly Operations: AWS data centers are optimized for energy efficiency, helping organizations
reduce their carbon footprint compared to maintaining on-premise systems.

Pay-As-You-Go Model

AWS's pay-as-you-go pricing structure revolutionizes the way businesses approach budgeting for IT
resources. Key aspects include:

1. Cost Efficiency: Businesses pay only for the resources they use, such as computing power, storage, or data
transfer, on an hourly or per-second basis (depending on the service).
2. Predictable Billing: Tools like AWS Cost Explorer and Billing Dashboard provide detailed insights into
resource consumption, allowing businesses to forecast and manage expenses effectively.
3. Scaling Without Overspending: The model supports elasticity, meaning businesses can expand their
operations during high demand and shrink them back when demand decreases—without being locked into
long-term commitments.
4. Reserved and Spot Instances: AWS offers additional cost-saving mechanisms, such as:
o Reserved Instances: Lower pricing for customers who commit to using specific resources for 1 or 3 years.
o Spot Instances: Heavily discounted rates for spare compute capacity, ideal for flexible and non-time-sensitive
workloads.

1.3 Global Infrastructure :


AWS's global infrastructure ensures high availability, reliability, and low latency for its services. Key
components include:

1. Regions:
AWS is divided into multiple Regions, which are geographically separate areas around the globe (e.g.,
North America, Europe, Asia-Pacific). Each region operates independently to offer localized services while
maintaining compliance with specific regional regulations.
2. Availability Zones (AZs):
Each region consists of multiple Availability Zones, which are clusters of data centers. These AZs are
physically separated to prevent failures in one AZ from affecting another. Businesses can distribute their
applications across multiple AZs to achieve high availability and fault tolerance.
3. Edge Locations:
AWS also employs edge locations as part of its Amazon CloudFront content delivery network (CDN).
These locations cache data closer to end-users, reducing latency and improving user experience.
4. Disaster Recovery and Fault Tolerance:
o Multi-Region Deployment: Businesses can replicate their applications across regions to ensure
uninterrupted service in case of outages.
o Automated Backups and Replication: Services like AWS Backup and Cross-Region Replication
allow businesses to safeguard data and recover quickly from disasters.
5. Security:
AWS infrastructure is designed with security in mind, offering:
o End-to-end encryption for data in transit and at rest.
o Physical security for data centers.
o Compliance with global standards such as GDPR, ISO, and HIPAA.
MODULE-2

2. Key AWS Services and their Applications :


2.1 Compute Services

1. Amazon EC2 (Elastic Compute Cloud)


EC2 provides resizable virtual machines in the cloud, enabling businesses to deploy and manage
applications flexibly.
o Internship Applications:
 Deployed web servers on EC2 instances and configured them to handle traffic variations
using Auto Scaling and Elastic Load Balancers (ELB).
 Optimized instance types based on workloads, selecting from general-purpose, compute-
optimized, or memory-optimized instances.
o Key Features:
 Customizable AMIs (Amazon Machine Images).
 On-demand, reserved, and spot instance pricing models.
 Integration with EBS for persistent storage.
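
As a concrete illustration of the EC2 deployments described above, the following is a minimal sketch I include for illustration only (the AMI ID, key pair, and security group are placeholder assumptions, not the actual internship configuration):

# Sketch: launch a t3.micro EC2 instance with a user-data script that installs a
# web server (boto3; the AMI ID, key pair, and security group IDs are placeholders).
import boto3

ec2 = boto3.client("ec2")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # hypothetical Amazon Linux AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="example-keypair",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    UserData="#!/bin/bash\nyum install -y httpd && systemctl start httpd\n",
)
print(resp["Instances"][0]["InstanceId"])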

2. AWS Lambda
Lambda enables serverless computing, where code runs in response to events without needing server
management.
o Internship Applications:
 Automated workflows by writing Lambda functions to process incoming data streams via

Amazon Kinesis.
 Integrated Lambda with Amazon S3 to trigger events when files were uploaded, such as
generating metadata or running data transformations.
o Key Features:
 Supports multiple programming languages.
 Pay only for the execution time of functions.
 Scales automatically to handle high traffic.
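
To make the S3-triggered Lambda workflow concrete, here is a minimal handler sketch (an illustration under assumed conditions; the metadata logic is not the actual internship code):

# Minimal sketch of an S3-triggered Lambda handler (assumed runtime: Python 3.x
# with boto3 available; the logged metadata fields are illustrative).
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # S3 put events deliver one or more records describing the uploaded object
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        head = s3.head_object(Bucket=bucket, Key=key)   # fetch object metadata
        print(json.dumps({"bucket": bucket, "key": key,
                          "size": head["ContentLength"],
                          "type": head.get("ContentType")}))
    return {"statusCode": 200}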

3
4
2.2 Storage Services

1. Amazon S3 (Simple Storage Service)


S3 provides scalable, high-durability object storage for a wide variety of data types.
o Internship Applications:
 Configured S3 buckets to store and organize files.
 Implemented lifecycle policies for automated data tiering, such as transitioning less-
frequently accessed data to S3 Glacier.
 Utilized bucket versioning to maintain data backups and enable recovery of previous file
versions.
o Key Features:
 Built-in redundancy across multiple AZs.
 Supports encryption for data security.
 Direct integration with services like Lambda and CloudFront.
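
The lifecycle and versioning configuration mentioned above can also be scripted; the snippet below is a hedged sketch using boto3 (the bucket name and the 90-day transition threshold are placeholder assumptions):

# Sketch: enable versioning and a Glacier transition rule on a bucket.
import boto3

s3 = boto3.client("s3")
bucket = "example-report-bucket"   # hypothetical bucket name

s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                        # apply to all objects
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)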

2. Amazon EBS (Elastic Block Store)


EBS offers block storage for use with EC2 instances, ideal for applications requiring high throughput or
transaction-intensive workloads.
o Internship Applications:
 Configured EBS volumes for EC2 instances, ensuring data persistence even when instances
were stopped or restarted.
 Snapshotted EBS volumes for backups and disaster recovery.
o Key Features:
 Provides SSD and HDD options for varying performance needs.
 Snapshots can be easily shared across regions.
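
Snapshotting an EBS volume, as described above, can likewise be done programmatically; a minimal hedged sketch (the volume ID is a placeholder assumption):

# Sketch: create a snapshot of an EBS volume (boto3; the volume ID is hypothetical).
import boto3

ec2 = boto3.client("ec2")
resp = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",          # hypothetical volume ID
    Description="Backup before maintenance",
)
print(resp["SnapshotId"])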

2.3 Networking Services

1. Amazon VPC (Virtual Private Cloud)


VPC lets users define isolated cloud environments for securely deploying applications.
o Internship Applications:

 Created VPCs with public and private subnets to host web servers and backend databases
securely.
 Configured route tables and internet gateways for controlled traffic flow.
 Set up security groups and network access control lists (NACLs) to restrict access based
on IPs and ports.
o Key Features:
 Customizable CIDR block ranges for IP address management.
 Integration with AWS Direct Connect for hybrid cloud setups.
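
A minimal sketch of creating a VPC with one public subnet, as described in the internship applications above (the CIDR blocks are illustrative assumptions):

# Sketch: create a VPC, a subnet, and an internet gateway (boto3; CIDR blocks are illustrative).
import boto3

ec2 = boto3.client("ec2")
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])
print(vpc["VpcId"], subnet["SubnetId"])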

2.4 Database Services

1. Amazon RDS
RDS simplifies the setup and management of relational databases like MySQL, PostgreSQL, and SQL
Server.
o Internship Applications:
 Deployed a MySQL database instance to support a web application backend.
 Configured automated backups and read replicas to ensure high availability.
o Key Features:
 Automated patching and backups.
 Supports multi-AZ deployments for disaster recovery.
 Provides performance insights for optimization.

2. Amazon DynamoDB
DynamoDB is a fully managed NoSQL database designed for high-performance, low-latency applications.
o Internship Applications:
 Created a DynamoDB table to store real-time data from IoT sensors, ensuring consistent
performance during high data influx.
 Integrated with DynamoDB Streams to trigger event-driven workflows via Lambda.
o Key Features:
 Automatically scales throughput.
 Provides in-memory caching with DAX (DynamoDB Accelerator).
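
As an illustration of the DynamoDB usage described above, here is a minimal hedged sketch (the table name, key schema, and item values are placeholder assumptions):

# Sketch: write and read an item in a DynamoDB table (boto3; table and keys are illustrative).
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("sensor-readings")          # hypothetical existing table

table.put_item(Item={"sensor_id": "s-001", "timestamp": 1720000000, "temperature": 27})
resp = table.get_item(Key={"sensor_id": "s-001", "timestamp": 1720000000})
print(resp.get("Item"))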

2.5 Monitoring and Management Services

1. AWS CloudWatch
CloudWatch monitors AWS resources and applications, offering real-time metrics and customizable
dashboards.
o Internship Applications:
 Configured alarms to notify administrators of abnormal resource utilization (e.g., high CPU
usage on EC2 instances).
 Analyzed logs for troubleshooting and identifying performance bottlenecks.
o Key Features:
 Integrated with all AWS services for comprehensive monitoring.
 Supports custom metrics and log aggregation.
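
The CPU-utilization alarm described above can be created programmatically; a minimal hedged sketch (the alarm name, instance ID, and threshold are illustrative assumptions):

# Sketch: a CloudWatch alarm on EC2 CPU utilization (boto3).
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",                       # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                                      # 5-minute evaluation windows
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)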

2. IAM
IAM manages access to AWS resources securely by defining roles, policies, and permissions.
o Internship Applications:
 Configured user groups with appropriate permissions for developers and administrators.
 Implemented least-privilege access policies to minimize security risks.
o Key Features:
 Supports multi-factor authentication (MFA).
 Centralized access control for AWS accounts.
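
To illustrate a least-privilege policy of the kind mentioned above, here is a minimal hedged sketch (the group name, bucket ARN, and allowed actions are illustrative assumptions):

# Sketch: attach a least-privilege inline policy to an IAM group (boto3).
import boto3, json

iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],        # only the actions this group needs
        "Resource": "arn:aws:s3:::example-report-bucket/*",
    }],
}
iam.put_group_policy(
    GroupName="developers",                 # hypothetical group
    PolicyName="s3-object-access",
    PolicyDocument=json.dumps(policy),
)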

MODULE – 3
Benefits of Cloud Services :

1. Compute Services

These services provide scalable and cost-efficient computing power.

Examples:

 Amazon EC2 (Elastic Compute Cloud): Virtual servers for running applications.
 AWS Lambda: Serverless computing to run code on demand.
 Amazon ECS (Elastic Container Service) & EKS (Elastic Kubernetes Service): Container orchestration
for microservices.

Benefits:

 Scalable and on-demand computing resources.


 Cost-efficient pay-per-use pricing.
 Easy integration with storage, networking, and other AWS services.
 Simplified serverless architecture with Lambda.

2. Storage Services

AWS provides multiple storage solutions for diverse data needs.

Examples:

 Amazon S3 (Simple Storage Service): Object storage for unstructured data.


 Amazon EBS (Elastic Block Store): Persistent block storage for EC2 instances.
 Amazon Glacier: Low-cost archival storage.

Benefits:

 Highly durable and available storage solutions (e.g., 99.999999999% durability in S3).
 Secure storage with encryption options.

 Cost optimization through lifecycle policies.
 Seamless integration with compute and analytics services.

3. Database Services

AWS offers managed relational and NoSQL database solutions.

Examples:

 Amazon RDS (Relational Database Service): Managed relational databases like MySQL, PostgreSQL, and
SQL Server.
 Amazon DynamoDB: A fully managed NoSQL database.
 Amazon Redshift: A data warehousing service for analytics.

Benefits:

 Automated database management (e.g., backups, updates, scaling).


 High performance and availability with multi-AZ deployments.
 Scalability to handle large data volumes.
 Secure and compliant database solutions.

4. Networking and Content Delivery

These services ensure secure, high-performance communication across resources.

Examples:

 Amazon VPC (Virtual Private Cloud): Customizable cloud networks.


 Amazon CloudFront: A content delivery network (CDN) for low-latency distribution.
 AWS Direct Connect: Dedicated network connections between on-premise and AWS.

Benefits:

 Secure and isolated cloud environments with VPC.


 Reduced latency through edge locations with CloudFront.
 Easy setup of hybrid cloud architectures.
 Flexible and scalable network configurations.

5. Machine Learning and Artificial Intelligence

AWS offers pre-built AI models and tools for custom ML solutions.

Examples:

 Amazon SageMaker: Build, train, and deploy machine learning models.


 Amazon Rekognition: Image and video analysis.
 Amazon Polly: Text-to-speech service.

Benefits:

 Democratized access to advanced AI and ML tools.


 Faster model development and deployment with SageMaker.
 Pre-trained models reduce time-to-market.
 Scalable infrastructure for deep learning and big data analytics.

6. Developer Tools

These services support software development, deployment, and monitoring.

Examples:

 AWS CodePipeline: Automates the software release process.


 AWS CodeBuild: Compiles code, runs tests, and produces deployable artifacts.
 AWS CodeDeploy: Automates code deployment to instances or services.

Benefits:

 Continuous integration and delivery (CI/CD) pipelines.
 Automation of build and deployment processes.
 Enhanced collaboration between development and operations teams.

7. Security, Identity, and Compliance

AWS offers services to protect data and manage access securely.

Examples:

 AWS IAM (Identity and Access Management): Role-based access control.


 AWS Shield: Protection against DDoS attacks.
 AWS WAF (Web Application Firewall): Protects web applications from common threats.

Benefits:

 Secure access management with granular permissions.


 Built-in protection against common vulnerabilities.
 Compliance with global security standards (e.g., GDPR, HIPAA).

8. Analytics

AWS provides tools for data processing, analytics, and visualization.

Examples:

 Amazon EMR (Elastic MapReduce): Big data processing with Apache Hadoop.
 Amazon Kinesis: Real-time data streaming and analytics.
 Amazon Athena: Serverless SQL queries on data stored in S3.

Benefits:

 Easy handling of large-scale data processing.


 Real-time insights with Kinesis.
 Cost-effective and serverless analytics tools.
 Integration with visualization tools like Amazon QuickSight.
9. Internet of Things (IoT)

AWS IoT services enable the connection, monitoring, and management of IoT devices.

Examples:

 AWS IoT Core: Securely connects IoT devices to the cloud.


 AWS IoT Greengrass: Brings cloud capabilities to edge devices.

Benefits:

 Scalable IoT device management.


 Edge computing for real-time responses.
 Secure device communication.

10. Migration and Transfer

AWS supports seamless migration of on-premise workloads to the cloud.

Examples:

 AWS Migration Hub: Tracks the progress of migrations.


 AWS Snowball: Transfers large data volumes securely and efficiently.
 AWS Database Migration Service (DMS): Migrates databases to AWS.

Benefits:

 Streamlined migration processes.


 Tools for database schema conversions.
 Secure, efficient data transfer options.

Benefits of AWS Cloud Services

1. Cost Efficiency: The pay-as-you-go model reduces operational costs.


2. Scalability: Resources can be scaled up or down based on demand.
3. Flexibility: A wide array of services supports different workloads and architectures.
4. Global Reach: Regions and edge locations ensure low latency worldwide.
5. Security: Built-in security features and compliance certifications.
6. Innovation: Advanced AI/ML and IoT tools enable cutting-edge solutions.

The first layer is the Embedding Layer, which transforms input tokens (like words or characters)
into dense, fixed-size vectors that capture semantic relationships, creating a meaningful numerical
representation in a continuous vector space.

Following this, the Recurrent Layer (such as RNN, LSTM, or GRU) processes the data
sequentially, maintaining a hidden state that updates with each input, making it highly effective for tasks
where the context of prior elements in the sequence matters, like sentences or time-based data.

Next comes the Attention Layer, which calculates attention weights to focus on important parts of
the sequence relative to each other, allowing the model to identify which elements in the sequence are most
relevant at any given point. This layer is especially vital in transformer models and encoder-decoder
architectures, where it greatly enhances the model's ability to capture complex dependencies within the
data.

The Feedforward Layer (or Dense Layer) then applies transformations to the features extracted
from previous layers. In transformer models, this layer is applied after the self-attention mechanism to
further process and refine the features.

Finally, the Output Layer produces the final prediction or classification probabilities. For
classification tasks, this layer often includes a softmax activation function, while in regression or sequence
generation tasks, it may have a different structure or activation function. Together, these five layers enable
sequencer models to effectively handle sequential data, each layer building on the previous one to
progressively capture, transform, and output information.
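
To make the five layers concrete, the following is a minimal sketch of a sequence classifier (the framework, TensorFlow/Keras, and all sizes are my illustrative assumptions, not material from the internship) that stacks an embedding layer, a recurrent layer, a self-attention layer, a feedforward layer, and an output layer:

# Minimal sketch of the five layers of a sequencer model (assumed framework: Keras;
# vocabulary size, dimensions, and class count are illustrative placeholders).
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE, SEQ_LEN, NUM_CLASSES = 10_000, 128, 5           # hypothetical sizes

inputs = layers.Input(shape=(SEQ_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 64)(inputs)                 # 1. Embedding layer
x = layers.LSTM(64, return_sequences=True)(x)                # 2. Recurrent layer (LSTM)
x = layers.Attention()([x, x])                               # 3. Attention layer (self-attention)
x = layers.Dense(64, activation="relu")(x)                   # 4. Feedforward (dense) layer
x = layers.GlobalAveragePooling1D()(x)                       # pool sequence features
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x) # 5. Output layer

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()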

MODULE-3

3.1 AI Vision, Classification & Neural Networks:


During this stage, the internship emphasized AI's role in computer vision, including the
classification and retrieval of images. A significant focus was placed on understanding Convolutional
Neural Networks (CNNs) and their application in tasks like image recognition. Additionally, the module
introduced neural networks, explaining their structure, activation functions, and practical implementation
in various real-world AI scenarios, such as pattern recognition and language processing.

Artificial Intelligence (AI) has significantly advanced the field of computer vision, enabling
machines to interpret and process visual information from the world in ways similar to human vision. AI
vision, also known as computer vision, refers to the ability of computers to understand and analyze digital
images and videos to perform tasks such as object recognition, image classification, segmentation, and
tracking. This capability is crucial for many real-world applications, including autonomous vehicles, facial
recognition, medical imaging, and industrial automation. AI vision systems rely on large datasets of
labeled images and videos to train models capable of identifying patterns, detecting objects, and making
decisions based on visual inputs.

In AI vision, image classification is one of the most fundamental tasks. Image classification involves
assigning a label or category to an image based on its content. For example, an image classification model
might be trained to identify whether an image contains a cat, a dog, or a car. The process of image
classification typically begins with data preprocessing, where raw images are prepared for analysis by
converting them into a numerical format that a machine learning model can understand. Common
preprocessing techniques include image resizing, normalization, and augmentation (such as rotating or
flipping images) to increase the diversity of training data. Following preprocessing, feature extraction is
performed to identify the distinctive patterns in the image that will be used for classification.

In traditional machine learning methods, this feature extraction was manual, requiring domain
expertise. However, with the advent of deep learning, feature extraction has become automatic through the
use of neural networks. Neural networks, particularly Convolutional Neural Networks (CNNs), have
revolutionized the field of image classification. CNNs are a class of deep learning models specifically
designed for processing grid-like data such as images.

The advantage of CNNs is their ability to automatically learn which features are most relevant for a
specific task without requiring manual intervention. CNNs consist of several layers, including
convolutional layers, pooling layers (which down-sample the data to reduce its dimensionality), and fully
connected layers (which aggregate the learned features for final classification). The use of CNNs has made
tasks like object detection and facial recognition highly accurate and efficient.
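
The layer structure just described can be sketched in a few lines; the following is an illustrative example only (the assumed framework is TensorFlow/Keras, and the 32x32 RGB input with 43 classes mirrors a typical traffic-sign dataset rather than the report's actual code):

# Minimal sketch of a CNN image classifier with convolutional, pooling, and
# fully connected layers (sizes and class count are illustrative assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # convolutional layer learns local features
    layers.MaxPooling2D((2, 2)),                    # pooling layer down-samples feature maps
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),           # fully connected layer aggregates features
    layers.Dense(43, activation="softmax"),         # class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])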

In addition to image classification, neural networks are used for more complex tasks in AI vision,
such as object detection and image segmentation. Object detection goes beyond classifying an image by
identifying and localizing multiple objects within an image. This is particularly useful in autonomous
driving, where the system must recognize various objects, including pedestrians, traffic signs, and other
vehicles, while simultaneously determining their location within the scene. CNN-based models like YOLO
(You Only Look Once) and RCNN (Region-based Convolutional Neural Networks) have been
instrumental in achieving real-time object detection.

Similarly, image segmentation tasks involve classifying each pixel in an image, dividing it into
meaningful segments, such as distinguishing between the background and the foreground of an image. This
pixel-level understanding of images is critical in fields like medical imaging, where precise segmentation
of organs or tissues is required.

In conclusion, the intersection of AI vision, classification, and neural networks has opened up a
world of possibilities for automating visual recognition tasks across various industries. From improving
image classification accuracy with CNNs to creating synthetic images with GANs, these technologies
continue to push the boundaries of what is possible in computer vision.

This is the area of AI that allows computers to understand and process visual inputs such as images
or videos. Computer vision encompasses tasks like image recognition, object detection, segmentation, and
scene understanding. With advances in deep learning, AI vision has improved significantly, enabling
applications like facial recognition, autonomous vehicles, medical imaging, and augmented reality.

Image classification is a fundamental task in computer vision where the system assigns a label to an
image based on its content. For example, in a dataset of animals, the model might classify images as "cat,"
"dog," or "bird." Classification can also go beyond identifying general categories to more specific tasks,
like diagnosing medical conditions from X-ray images or detecting types of defects in manufacturing.
Convolutional Neural Networks (CNNs) are particularly popular in vision tasks because they are
designed to process grid-like data, such as images. CNNs can learn to detect patterns through layers of
filters that capture edges, textures, shapes, and higher-order features. These networks are trained on large
datasets to generalize and recognize similar patterns in new images. More complex architectures like
Renat, Inception, and VGG have further improved image classification accuracy and efficiency, while
newer models, such as Vision Transformers (VIT), apply transformer-based architectures to vision tasks,
achieving state-of-the-art performance in various computer vision benchmarks.

AI Vision, or computer vision, aims to replicate human sight and interpretation capabilities,
allowing machines to gather insights from visual data. This process goes beyond just identifying objects; it
includes understanding context, relationships between objects, and even complex scenarios, such as
detecting emotions or understanding interactions in a scene. One of the foundational techniques in AI
vision is image preprocessing, which involves preparing images for analysis by improving quality,
reducing noise, and standardizing image sizes.

MODULE -4
4.1 Reinforcement Learning & AI Problem Solving:

This segment introduced reinforcement learning, which involves training AI models to make
decisions based on interactions with their environment. Topics covered included Markov Decision
Processes (MDPs), Q-learning, and policy gradients. The module also explored problem-solving
techniques using AI, including uninformed search methods like BFS and DFS, informed search algorithms
like A*, and constraint satisfaction problems (CSPs). These concepts are foundational in developing
intelligent systems capable of solving complex problems autonomously.

Reinforcement Learning (RL) is a crucial paradigm in artificial intelligence (AI) that enables
machines to learn by interacting with their environment. Unlike traditional supervised learning, where
models learn from labelled data, RL involves an agent that learns to make decisions through trial and error
by receiving rewards or penalties based on the actions it takes. The goal of RL is to develop a policy—a
strategy that tells the agent the best action to take in a given state to maximize the cumulative reward over
time. This approach has been instrumental in solving complex decision-making problems in various fields
such as robotics, gaming, finance, healthcare, and autonomous systems.

In RL, the agent operates within an environment and follows a cyclical process: it perceives the
state of the environment, selects an action, and then receives feedback in the form of a reward. This reward
serves as the signal that the agent uses to learn and improve its behavior over time. A key concept in RL is
the "exploration exploitation trade-off," where the agent must balance exploring new strategies to discover
better rewards versus exploiting known strategies that have yielded high rewards in the past. Over time, the
agent learns a policy that optimizes long-term rewards. Techniques like Q-learning, Deep Q-Networks
(DQN), and Policy Gradient methods have been developed to help agents efficiently learn in complex
environments.
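
As an illustration of the Q-learning update mentioned above, here is a minimal tabular sketch (the chain environment, rewards, and hyperparameters are my illustrative assumptions, not part of the internship material):

# Minimal tabular Q-learning sketch. Q[s][a] is updated toward r + gamma * max_a' Q[s'][a'].
import random

N_STATES, N_ACTIONS = 5, 2          # hypothetical chain environment: actions move left/right
alpha, gamma, epsilon = 0.1, 0.9, 0.2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    # Move right (+1) or left (-1); reward 1 only when the last state is reached.
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: explore occasionally, otherwise exploit the best known action
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Q-learning update rule
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)   # after training, action 1 (move right) should dominate in every state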

The real strength of RL comes from its ability to solve sequential decision-making problems, where
the outcome of one decision impacts the next. For example, in gaming, an RL agent learns how to navigate
a series of moves to win, considering the consequences of each action on future rewards. Reinforcement
Learning (RL) is a type of machine learning where an agent learns to make decisions by interacting with an
environment. Unlike supervised learning, where models learn from labeled data, RL agents learn through
trial and error by receiving rewards or penalties for their actions, which gradually guides them toward
achieving a specified goal.

4.2 Applications of Reinforcement Learning in AI Problem Solving

The applicability of RL in AI problem-solving is vast, and it has contributed to advancements


across multiple industries. One of the most famous applications is in the field of game playing, notably
with AlphaGo and AlphaZero, where RL was key to developing AI systems that surpassed human-level
performance in games like Go and Chess. These systems use RL to continuously improve their strategy,
analyzing millions of potential moves and learning from the outcomes of simulated games. This level of AI
problem-solving involves optimizing performance by learning from complex environments with a large
number of variables, a task that traditional algorithms struggle to accomplish effectively.

Another domain where RL has shown great promise is in autonomous vehicles. The decision-
making process in self-driving cars involves navigating traffic, avoiding obstacles, and adhering to road
rules while minimizing the risk of accidents. RL provides a framework for these systems to learn optimal
driving policies by interacting with virtual or real environments, gradually improving through feedback
and real-world data. RL allows these systems to adapt to highly dynamic, uncertain environments, ensuring
safer and more efficient driving decisions.

In finance, RL is applied to trading algorithms, where the goal is to maximize long-term profit.
Traders face a complex environment with fluctuating market conditions, and RL enables systems to learn
strategies for buying and selling assets by evaluating the outcomes of their actions in various market states.
The system constantly refines its policy to balance risk and return. This approach is also useful in portfolio
management, where RL helps in learning optimal asset allocation strategies over time. Despite its
remarkable success, RL faces several challenges. One of the most significant is the issue of sample
inefficiency. RL often requires a large number of interactions with the environment to learn an optimal
policy, which is particularly problematic in environments where obtaining real-world data is expensive or
risky (e.g., healthcare or autonomous driving).

In conclusion, Reinforcement Learning represents a powerful approach to AI problem solving. Its


ability to learn from interactions and make decisions based on feedback makes it an ideal framework for
addressing complex, real-world problems. Although challenges remain in terms of sample efficiency,
generalization, and real-world deployment, advancements in RL hold the potential to revolutionize
industries from robotics and autonomous vehicles to healthcare and finance.

As RL continues to evolve, it will undoubtedly unlock new possibilities in AI driven solutions, pushing the
boundaries of what intelligent systems can achieve.

Reinforcement Learning (RL) is widely applied in AI to solve complex, dynamic problems where
decision-making unfolds over time, making it essential in areas requiring adaptive, sequential strategies. In
gaming, RL has achieved remarkable results, with AI agents mastering games such as Go, Dota 2, and
Chess, often surpassing human expertise by developing novel strategies.

In robotics, RL is used to train robots for tasks like object manipulation, path planning, and even
humanoid movement, allowing them to learn skills through trial and error in real or simulated
environments. This capability is essential for applications in industries ranging from manufacturing to
healthcare, where precision and adaptability are crucial. In autonomous driving, RL algorithms play a vital
role in navigation, obstacle avoidance, and decision-making by allowing vehicles to learn optimal driving
behaviors in dynamic, unpredictable traffic environments, enhancing road safety and efficiency.

Finance also benefits from RL, as trading algorithms learn to make strategic investment decisions
by analyzing market data and adapting to changing conditions, optimizing returns over time. Additionally,
in healthcare, RL is applied to personalize treatment plans and optimize medical interventions, where
agents learn to adjust dosage or select therapies based on individual patient responses, improving outcomes
in fields such as chronic disease management. The ability of RL to continuously improve through
experience and adapt to new challenges makes it a powerful tool across these diverse applications, each
requiring sophisticated problem-solving in ever-changing environments.

Reinforcement Learning’s versatility in AI problem-solving extends across several other impactful


domains. In energy management, RL helps optimize the allocation and consumption of resources within
smart grids, balancing supply and demand in real-time to minimize costs and reduce environmental impact.
By dynamically adjusting the operation of power plants and renewable energy sources, RL enables more
sustainable energy usage and grid stability. In supply chain and logistics, RL optimizes inventory
management, routing, and warehouse operations, allowing companies to streamline processes, reduce
costs, and improve customer satisfaction. By adapting to fluctuations in demand, weather, and
transportation availability, RL ensures that products are delivered efficiently and resources are utilized
optimally.

MODULE -5

Python Basics:
5.1 Features of Python:

1. Simplicity: The syntax is very simple, almost similar to English.
2. Built-in library support: For example, adding 3 complex numbers takes roughly 15-20 lines in C/Java,
   but only 3-4 lines in Python.
3. No pre-requisites.
4. Support for emerging application areas:

 Robotics

 IoT

 AI/ML

 Computer vision

 Data science

 Natural Language Processing

5. Application types:

 Web applications

 GUI applications

 Mobile applications

6. Reduces the programmer's work:

 Example: Factorial

 C/Java: minimum 10-12 lines

 In Python we can use the math module and simply call factorial in one line.
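
For instance, the factorial example mentioned above is a single call in Python:

import math
print(math.factorial(5))   # 120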

5.2 Environments:

 Python IDLE
 PyCharm
 Jupyter Notebook
 Spyder
 Visual Studio Code
 Google Colab (cloud-based)
CPU: Central Processing Unit
GPU: Graphical Processing Unit
TPU: Tensor Processing Unit

5.3 Python Applications:

1. .py
2. .ipynb - interactive Python notebook

 Write source code

 Write documentation

 Output in the same notebook

 We get visualizations

o Bar chart
o Graph
o Pie chart, etc.

5.4 Data types in Python:

type() is a function used to know the datatype in Python.
Basic data types:
1. Numerical datatypes
a) int: numbers which do not have any fractional part
ex: 10, 2, 346363, 4737373, 90
b) float: numbers which have a fractional part
ex: 2.3, 5.5754, 6.799
c) complex: numbers having real and imaginary parts (a+ib)
a - real part
b - imaginary part
syntax: a+b*j
ex: 2+4j

2. Boolean: True or False
ex:
a=10
print(a==10)
True
print(a==20)
False
type(a==10)
<class 'bool'>
type(a==20)
<class 'bool'>

3. Strings: a collection of characters. A string may be enclosed in single, double, or triple quotes.
ex:
s="hello"
type(s)
<class 'str'>
s1="abc123"
type(s1)
<class 'str'>
s2="2432"
type(s2)
<class 'str'>
s3="1213@#$"
type(s3)
<class 'str'>
s4='abc12'
type(s4)
<class 'str'>
s5='''hello
this
is
also
string
only'''
type(s5)
<class 'str'>
s6="""this
is
one
more"""
type(s6)
<class 'str'>
Advanced datatypes:
1. Lists
2. Tuples
3. Dictionaries

5.5 Operations in Python:

1. Arithmetic operators:
/ (division): returns a float value
// (floor division): returns an int value
% (modulus): returns the remainder
** (exponentiation): power

2. Relational operators: return Boolean values
<:  5<6   True
>:  5>6   False
<=: 5<=6  True
>=: 5>=6  False
==: 5==6  False
!=: 5!=6  True
ex: #Relational operators
a=100
b=200
a<b
True
a>b
False
a<=b
True
a>=b
False
a==b
False
a!=b
True

3. Logical operators: also return Boolean values only.
These logical operators work on truth tables: and, or, not.
ex: #Logical operators
a=10
b=20
a==5 and b==10
False
a==10 or b==20
True
not a==30
True

4. Assignment operators:
= (normal assignment)
ex:
a=10
+= (addition assignment): first add, then assign

5. Bitwise operators: these operators work on bits (binary digits), i.e. 0 or 1
bitwise and (&): output is 1 only if both inputs are 1, otherwise 0
bitwise or (|): output is 0 only if both inputs are 0, otherwise 1
bitwise not (~): if the input is 0, the output is 1
left shift (<<)
right shift (>>)
bitwise xor (^)
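
A few quick interpreter examples of the arithmetic and bitwise operators described above (values chosen only for illustration):

# Arithmetic operators
print(7 / 2)    # 3.5  (division returns a float)
print(7 // 2)   # 3    (floor division returns an int)
print(7 % 2)    # 1    (remainder)
print(2 ** 5)   # 32   (exponentiation)

# Bitwise operators on 5 (0b101) and 3 (0b011)
print(5 & 3)    # 1  (and)
print(5 | 3)    # 7  (or)
print(5 ^ 3)    # 6  (xor)
print(5 << 1)   # 10 (left shift)
print(5 >> 1)   # 2  (right shift)
print(~5)       # -6 (bitwise not)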

5.6 Advanced Datatypes:
Lists:
A list is a collection of elements (of the same datatype or of different datatypes).
A list can be created using [ ].
List indexing (position) starts from zero.
EX: #LISTS
L=[]
type(L)
<class 'list'>
L=[10,20,30,40]
type(L)
<class 'list'>
M=[10,1.2,'hello',13,2+6j]
type(M)
<class 'list'>

Operations on Lists:
1. append(): adding elements to a list (at the end)
#append
A=[]
A.append(10)
A.append(20)
A.append(30)
A.append(40)
print(A)
[10, 20, 30, 40]
append() will add elements at the end only,
so we can't add elements in between or at other positions.

2. insert(): will add an element at any position
syntax: Listname.insert(pos, val)
ex:
#insert
A.insert(1,15)
print(A)
[10, 15, 20, 30, 40, 50]
A.insert(3,25)
print(A)
[10, 15, 20, 25, 30, 40, 50]

3. delete: deleting an element from a list
syntax: del Listname[index]
ex: del L[0]
ex:
#delete
print(A)
[10, 15, 20, 25, 30, 40, 50]
del A[0]
print(A)
[15, 20, 25, 30, 40, 50]
del A[2]
print(A)
[15, 20, 30, 40, 50]

4. update: modifying list elements
syntax: Listname[index]=new_value
ex:
#update
print(A)
[15, 20, 30, 40, 50]
A[0]=100
print(A)
[100, 20, 30, 40, 50]
A[1]=200
print(A)
[100, 200, 30, 40, 50]
A[2]=300
print(A)
[100, 200, 300, 40, 50]

5. count: gives the number of elements in the list
syntax: len(Listname)
ex:
#count
len(A)

6. repeat: repeating the elements of a list
syntax: listname*n
ex:
#Repeat
print(A)
[100, 200, 300, 40, 50]
A*3
[100, 200, 300, 40, 50, 100, 200, 300, 40, 50, 100, 200, 300, 40, 50]

7. accessing or printing:
By using an index we can print the list elements. Positive indexing starts from 0;
negative indexing starts from -1, and with it we can also access list elements.
syntax: listname[index]
ex: L[0]
#Accessing
A[0]   100
A[1]   200
A[2]   300
A[-1]  50
A[-2]  40
A[-3]  300

8. min and max:
syntax: min(list), max(list)
ex:
min(A)
40
print(A)
[100, 200, 300, 40, 50]
max(A)
300

9. merge: combining two lists
syntax: List1+List2
ex:
#merge
print(A)
[100, 200, 300, 40, 50]
B=[10,20,30]
A+B
[100, 200, 300, 40, 50, 10, 20, 30]

10. extend: somewhat similar to merge
syntax: List1.extend(List2)
ex:
C=[10,20,30]
D=[40,50,60]
C+D
[10, 20, 30, 40, 50, 60]
print(C)
[10, 20, 30]
C.extend(D)
C
[10, 20, 30, 40, 50, 60]

11. slicing: accessing part of a list
syntax: listname[leftindex:rightindex]
ex: L[1:4] returns the elements L[1], L[2], L[3]
ex:
#slice
print(A)
[100, 200, 300, 40, 50, 10, 20, 30]
A[1:4]
[200, 300, 40]
A[2:4]
[300, 40]
A[:3]
[100, 200, 300]
A[1:]
[200, 300, 40, 50, 10, 20, 30]

12. sort: arranging the elements in order
syntax: List.sort()
ex:
#sort
print(A)
[100, 200, 300, 40, 50, 10, 20, 30]
A.sort()
print(A)
[10, 20, 30, 40, 50, 100, 200, 300]
#reverse
A.sort(reverse=True)
A
[300, 200, 100, 50, 40, 30, 20, 10]

13. copy:
syntax: new_list=old_list
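
Note that new_list = old_list only creates a second name for the same list object; a short illustration (my addition, not from the report) of how this differs from the built-in copy() method:

A = [1, 2, 3]
B = A            # B and A refer to the same list object
B.append(4)
print(A)         # [1, 2, 3, 4] - A changed too
C = A.copy()     # independent shallow copy
C.append(5)
print(A)         # [1, 2, 3, 4] - A unaffected
print(C)         # [1, 2, 3, 4, 5]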

5.7 List Implementation:

1. APPEND OPERATION:
L=[]
print("APPEND")
e1=int(input("enter first element"))
L.append(e1)
print("after adding the first element the list is",L)
e2=int(input("enter second element"))
L.append(e2)
print("after adding the second element the list is",L)
e3=int(input("enter third element"))
L.append(e3)
print("after adding the third element the list is",L)

2. INSERT OPERATION:
print("INSERT")
i1=int(input("enter the position where we want to insert"))
v1=int(input("enter the value"))
L.insert(i1,v1)
print("element inserted")
i2=int(input("enter the position where we want to insert"))
v2=int(input("enter the value"))
L.insert(i2,v2)
print("element inserted")
print("after insert operation the list is",L)

3. DELETE OPERATION:
print("DELETE")
d1=int(input("enter the position of element to be deleted"))
del L[d1]
print("after deleting the element the list is",L)

4. UPDATE OPERATION:
print("UPDATE")
u1=int(input("enter the index of the element to be updated"))
new_val=int(input("enter the new value"))
L[u1]=new_val
print("after update the list is",L)

5. REPEAT OPERATION:
print("REPEAT")
r=int(input("enter how many times you want to repeat the list elements"))
print("The repeated list is",L*r)

6. COUNT, 7. MIN and 8. MAX OPERATIONS:
print("COUNT")
print("The number of elements in the list is",len(L))
print("MIN: minimum element in the list is",min(L))
print("MAX: maximum element in the list is",max(L))

9. SLICING:
print("SLICING USING BOTH INDEXES")
l1=int(input("enter left index"))
r1=int(input("enter right index"))
print("After slicing the list is",L[l1:r1])
print("SLICING USING LEFT INDEX ONLY")
l2=int(input("enter left index"))
print("After slicing using the left index the list is",L[l2:])
print("SLICING USING RIGHT INDEX ONLY")
r2=int(input("enter right index"))
print("After slicing using the right index the list is",L[:r2])

10. MERGE OPERATION:
print("MERGE")
M=[100,200,300]
print("after merging the list is",L+M)

11. EXTEND OPERATION:
print("EXTEND")
L.extend(M)
print("after extending the original list is",L)

12. SORT OPERATION:
print("ASCENDING ORDER SORTING")
L.sort()
print("after sorting in ascending order the list is",L)
print("DESCENDING ORDER SORTING")
L.sort(reverse=True)
print("after sorting in descending order the list is",L)

13. COPY OPERATION:
print("COPY")
A=L
print("after copying the list L into A, the list A is",A)

14. PRINTING/ACCESSING ELEMENTS OPERATION:
print("ACCESSING")
print("first element in list is",L[0])
print("last element in list is",L[-1])

5.8 TUPLE:
Tuple is some what similar to list
but the main difference between Tuple and List is
List is mutable where as tuple is immutable Mutable means the elements can be added,deleted, updated
etc..
where as immutable means the elements can't be
changed. Tuple can be created with ( ) ex:
t=(100,20,10,30,50,70,40)
1. append
2. insert
3. update
4. delete

26
5. sort
6. extend
The above operations try to change the structure of the tuple; hence they are not
possible. For example, t.sort() raises an AttributeError, whereas the built-in sorted(t)
returns a new sorted list:
sorted(t)
[10, 20, 30, 40, 50, 70, 100]
is slicing possible in a tuple? yes
is merging possible in a tuple? yes
is extend possible in a tuple? no
is repeat possible? yes
When do we prefer a tuple over a list?
We prefer a tuple when we are going to work with static data or with constants.
A tuple can be converted into a list.
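A short sketch of converting a tuple into a list so that its elements can be changed, and back again (values are illustrative):

t=(100,20,10,30,50,70,40)
temp=list(t)        # tuple -> list, now mutable
temp.append(500)
t=tuple(temp)       # list -> tuple again
print(t)
(100, 20, 10, 30, 50, 70, 40, 500)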

5.9 SETS:
A set is a collection of unordered elements, whereas a list is ordered and a tuple is
also ordered.
Ordered means sequential and accessible using an index;
unordered means the elements cannot be accessed using an index.
Sets can be created using { }
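A minimal sketch showing why a set cannot be accessed by index (the values are illustrative):

s={10,20,30,40}
s[0]        # raises TypeError: 'set' object is not subscriptable
10 in s     # membership testing works instead
True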

Operations on Sets:
1. add():
adding elements to set
ex:
#sets
s={10,20,30,40}
type(s)
<class 'set'>
#add
s.add(50)
s.add(60)
s
{40, 10, 50, 20, 60, 30}

2. delete:
deleting elements from a set
remove()

ex:
s.remove(20)
s
{40, 10, 50, 60, 30}
pop()
ex:
s.pop()
40
s
{10, 50, 60, 30}

3. Union:
Combining elements from both sets ex:
s1={10,20,30}
s2={100,200,300}
s1.union(s2)
{20, 100, 200, 10, 300, 30}

4. Intersection:
Finding the common elements from both the sets
ex:
#INTERSECTION
s1.intersection(s2)
set()
s3={100,1000,500}
s2.intersection(s3)
{100}

5. Set difference:
gives the remaining elements of the first set, excluding the elements common to both sets
ex:
#set difference
s2-s3
{200, 300}
s3-s2
{1000, 500}

6. superset and subset:


ex:
A={1,2,3}
B={1,2}
B is a subset of A, or equivalently, A is a superset of B.
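A short sketch of checking these relationships with the built-in set methods:

A={1,2,3}
B={1,2}
B.issubset(A)
True
A.issuperset(B)
True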
7. Set Equality:
Two sets are said to be equal if they have the same elements
ex:
A={1,2,3}
B={1,2,3}
A==B
True

8. frozenset():
A frozenset is a read-only (frozen) set: once created, elements cannot be added or removed.
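A minimal sketch (illustrative values):

fs=frozenset({10,20,30})
fs.add(40)    # raises AttributeError: 'frozenset' object has no attribute 'add'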

9. discard():
Removes an element from a set, similar to remove() and pop().
Difference between discard() and remove():
discard() will not give any error even though the element to be deleted is not available
in the set, but remove() will give an error if the element to be deleted is not there in
the set.
ex:
A
{50, 20, 40, 30}
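A short illustrative sketch of the difference, using the set A shown above:

A={50, 20, 40, 30}    # the set shown above
A.discard(100)        # 100 is not in A: no error, A is unchanged
A.remove(100)         # raises KeyError: 100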

ANNEXURE

TRAFFIC SIGN RECOGNITION PROJECT

CODE AND RESULT OF DEMO PROJECT:

IMPORTING LIBRARIES
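The code itself appears as screenshots in the figures below. As a hedged illustration only (not the report's exact code), the imports such a traffic sign recognition pipeline typically needs might look like:

import numpy as np                # numerical arrays for image data
import matplotlib.pyplot as plt   # displaying sample images (plt.imshow)
import tensorflow as tf           # deep learning framework
from tensorflow import keras      # high-level API for building the model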

Fig 1.1 (plt.imshow(X_train[0]))    Fig 1.2 (plt.imshow(X_test[0]))

Fig 1.3 (Speed limit (80km/h))    Fig 1.4 (Speed limit (70km/h))

Fig 1.5 (No passing for vehicles over 3.5 metric tons)    Fig 1.6 (Road Work)

Fig 1.7 (Turn left Ahead)    Fig 1.8 (Keep Right)

Fig 1.9 (Flaggers in road ahead warning)    Fig 2.0 (Traffic Signal Ahead)

Fig 2.1 (plt.imshow(X_valid[0]))

Fig 2.2 (Train Labels)    Fig 2.3 (Valid Labels)    Fig 2.4 (Test Labels)

Fig 2.5 (Original)    Fig 2.6 (Scaled)    Fig 2.7 (Translation)    Fig 2.8 (Rotation)

Fig 2.9 (Labels of the Train and Augmentation)

Fig 3.0 (Epoch Value)

CONCLUSION

Completing the AI virtual internship with Skill Dzire has been an invaluable experience that
deepened my understanding of artificial intelligence and its real-world applications. By gaining hands-on
experience with AI tools and frameworks, I developed strong technical skills and enhanced my problem-
solving capabilities. This internship has not only broadened my expertise in machine learning, natural
language processing, and data analysis but has also prepared me to tackle complex challenges in AI with a
strategic approach. I am now better equipped to pursue advanced opportunities in the field of AI, confident
in my ability to contribute to innovative, data-driven solutions.

EXECUTIVE SUMMARY

This report summarizes my two-month virtual internship experience in artificial intelligence (AI)
with Skill Dzire. Over the course of the internship, I gained hands-on experience with key AI concepts and
tools, working on practical projects that included data analysis, machine learning, and model evaluation.
This experience allowed me to apply theoretical knowledge in a professional environment, deepening my
understanding of AI workflows and problem-solving techniques.

I had the chance to work on advanced topics such as neural networks, natural language processing,
and computer vision, which expanded my technical skills and introduced me to the diverse applications of
AI across industries. The mentorship I received was invaluable, as industry professionals guided me
through complex tasks and provided insights into the ethical considerations essential to responsible AI
development. Working alongside fellow interns also enhanced my communication, teamwork, and
adaptability skills, as we collaborated to tackle challenges and share knowledge.

Overall, this two-month internship significantly strengthened my foundation in AI, giving me both
the technical skills and confidence to pursue a career in this dynamic field. I am grateful for the
opportunity to learn and grow at Skill Dzire and look forward to applying these skills in future AI-driven
projects.

Effective communication and collaboration were also critical aspects of the internship, given the
virtual format. Coordinating with mentors and peers required proactive communication and adaptability, as
immediate feedback and quick problem-solving were sometimes limited. This experience highlighted the
importance of clear, concise communication in overcoming barriers to remote teamwork. Another
significant learning area was understanding the ethical implications of AI. Ensuring responsible AI
practices, such as minimizing biases and maintaining transparency, required awareness of both technical
and ethical dimensions, which deepened my perspective on the broader societal impact of AI.

ABOUT THE COMPANY

Skill Dzire is a leading training and skill development company focused on empowering individuals with
the practical skills and industry knowledge needed to excel in emerging fields such as Artificial
Intelligence (AI), Machine Learning (ML), Data Science, and software development. By bridging the gap
between theoretical learning and industry demands, Skill Dzire is dedicated to fostering a skilled
workforce that is prepared to tackle real-world challenges in today’s technology-driven world.

Mission:
Skill Dzire’s mission is to empower individuals with cutting-edge technical skills, practical experience,
and real-world knowledge, making them job-ready and competitive in the global workforce. Through high-
quality training, hands-on projects, and industry-aligned curriculum, Skill Dzire aims to build a robust,
skilled talent pool that meets the evolving needs of industries and contributes positively to society.

Vision:
Skill Dzire envisions a future where accessible quality skill development allows individuals from all
backgrounds to succeed and thrive in high-demand fields. By becoming a leading force in technical
education and professional training, Skill Dzire seeks to contribute to the creation of a sustainable,
innovation-driven economy, where technology and skill development play a pivotal role in societal
advancement.

OPPORTUNITIES:

During this two-month AI virtual internship, I was given the opportunity to perform the
following role:

Intern:

 Regular Team Coordination: Collaborated with team members and mentors consistently to
discuss project progress, attend meetings, and stay aligned on objectives and tasks.

 Hands-on Experience with AI Tools: Learned and applied various tools and platforms for
developing AI models and analysing data, enhancing practical skills in AI technologies.

 Referencing Resources: Utilized GitHub repositories and online resources to deepen knowledge
on AI concepts and techniques relevant to the project.

 Requirement Gathering: Gathered and analysed project requirements to understand objectives
and define goals for effective AI model development.

 Cross-Project Engagement: Gained opportunities to explore and voluntarily contribute to other AI
projects, broadening exposure and gaining insights into different applications of AI.

 Task-Specific Assignments: Worked on specific tasks related to developing various components
of AI models and pipelines, building technical skills incrementally.

 Skill Assessment: Completed skill-based assessments and tests at the end of the internship,
certifying knowledge and application of AI concepts and tools.

TRAINING:

During the two-month AI virtual internship, I received intensive training in core concepts of
Artificial Intelligence, machine learning techniques, and Python programming. This training was essential
in building a strong foundation in AI development and data processing.

AI Concepts and Techniques:


Training in AI covered a wide range of topics and tools, providing an understanding of how to build and
implement intelligent models for various applications.

1. Data Collection and Preprocessing: Learned methods to gather, clean, and preprocess raw data, which
is critical for ensuring high-quality input for AI models.
2. Supervised and Unsupervised Learning: Understood the difference between supervised and
unsupervised learning, learning to apply algorithms such as regression, classification, clustering, and
dimensionality reduction.
3. Neural Networks: Gained insight into building and training neural networks, understanding layers,
activation functions, and backpropagation to enhance model accuracy and complexity.
4. Natural Language Processing (NLP): Trained in NLP techniques, including tokenization, stemming,
and sentiment analysis, for processing and understanding text data.
5. Computer Vision: Explored computer vision techniques, such as image classification and object
detection, which enabled me to work with visual data effectively.
6. Model Evaluation and Optimization: Learned how to assess model performance using evaluation
metrics and techniques like cross-validation, and to optimize models through hyperparameter tuning.
7. Ethics and Responsible AI: Covered essential topics on responsible AI, focusing on data privacy, bias
mitigation, and ethical decision-making in AI projects.

Python Programming:
Python was the primary programming language used throughout the internship, given its extensive libraries
and ease of use in AI development.

1. Python Basics: Reinforced the fundamentals of Python, including data types, loops, functions,
and object-oriented programming, to ensure proficiency with the language.

2. Data Handling with Pandas: Trained in using the Pandas library for data manipulation, including
data cleaning, transformation, and aggregation, which was crucial for handling large datasets.

3. Numerical Computing with NumPy: Utilized the NumPy library for efficient numerical
computations, matrix operations, and handling multidimensional arrays, which are frequently used in
machine learning.

4. Data Visualization with Matplotlib and Seaborn: Learned to create visual representations of data
using libraries like Matplotlib and Seaborn, which helped in understanding trends, patterns, and
insights within datasets.

5. Machine Learning Libraries (Scikit-Learn): Received training in Scikit-Learn, a key library for
implementing machine learning algorithms such as linear regression, k-means clustering, and decision
trees (a short illustrative sketch combining Pandas and Scikit-Learn appears after this list).

6. Deep Learning with TensorFlow and Keras: Gained hands-on experience with TensorFlow and
Keras for building, training, and deploying neural network models, enabling a deeper dive into deep
learning applications.

7. Debugging and Testing: Emphasized best practices for debugging code, writing efficient functions,
and testing model performance to ensure robust and accurate results.
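To illustrate how these libraries fit together, the following is a minimal, illustrative sketch (not code from the internship project; the file name "data.csv" and the "label" column are hypothetical) of loading data with Pandas, training a Scikit-Learn model, and evaluating it:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# hypothetical dataset with feature columns and a "label" column
df = pd.read_csv("data.csv")
X = df.drop(columns=["label"])
y = df["label"]

# hold out a test set for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, predictions))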

The training provided during this internship equipped me with essential AI and Python
programming skills, laying a strong foundation for developing effective AI-driven
solutions.

CHALLENGES FACED:

 Steep Learning Curve: Understanding complex AI concepts, especially in areas
like machine learning and neural networks, required intensive learning within a
limited time frame.
 Limited Access to Computing Resources: Many AI tasks need high processing
power, and working within the constraints of available hardware made it
challenging to run extensive simulations and handle large datasets effectively.
 Debugging and Troubleshooting: Identifying and resolving errors, especially in
neural networks or large datasets, was time-consuming and required meticulous
debugging.
 Challenges of Virtual Format: Limited in-person interaction made quick
resolution of doubts and receiving feedback harder, impacting the flow of
collaborative learning.
 Maintaining Self-Discipline: Staying motivated and managing time effectively
was necessary to meet learning goals and project deadlines in a remote
environment.
 Understanding Ethical and Responsible AI: Learning to address issues like
bias, fairness, and transparency in AI models was essential but challenging,
requiring a grasp of both ethical and regulatory considerations.
