BACHELOR OF TECHNOLOGY
in
ELECTRONICS AND COMMUNICATION ENGINEERING
Submitted by
M. NITHYASRI
(22A91A0441)
2024-2025
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
CERTIFICATE
This is to certify that the internship report entitled “AWS CLOUD” is being submitted by M. NITHYASRI (22A91A0441) in partial fulfillment of the requirements for the award of the B.Tech. degree in Electronics and Communication Engineering for the academic year 2024-2025.
I hereby declare that the internship report entitled “AWS CLOUD” is a genuine report. This work has been submitted to ADITYA UNIVERSITY, Surampalem, in partial fulfillment of the B.Tech. degree. I further declare that this report has not been submitted, in full or in part, for the award of any degree of this or any other educational institution.
by
M. NITHYASRI
(22A91A0441)
INTERNSHIP COMPLETION CERTIFICATE
ACKNOWLEDGEMENT
First, I would like to thank the Director of Technical Hub, Surampalem, for giving me the opportunity to do an internship within the organization. I would also like to thank all the people who worked along with me at Technical Hub, Surampalem; with their patience and openness they created an enjoyable working environment.
It is with immense pleasure that I express my indebted gratitude to my internship coordinator, Ch. V. Kiranmayi, Assistant Professor, who guided and encouraged me in every step of the intern project work; her valuable moral support and guidance throughout the project helped me to a great extent.
I am grateful to Dr. Sanjeev Kumar, Associate Professor and HOD, for inspiring me all the way and for arranging all the facilities and resources needed for my intern project work.
I wish to thank Dr. M.V. Rajesh, Associate Dean, and Dr. Dola Sanjay, Dean, School of Engineering, for their encouragement and support during the course of my intern project work.
I would like to extend my sincere thanks to Dr. G. Suresh, Registrar, Dr. S. Rama Sree, Pro Vice-Chancellor, Dr. M.B. Srinivas, Vice-Chancellor, Dr. M. Sreenivasa Reddy, Deputy Pro Chancellor, and the Management of Aditya University for their unconditional support in providing the best infrastructural facilities and state-of-the-art laboratories during my intern project work.
Not to forget, the Faculty, Lab Technicians, Non-Teaching Staff, and my friends who directly or indirectly helped and supported me in completing my intern project work in time.
ABSTRACT
This report summarizes my internship at Amazon Web Services (AWS), where I gained hands-on
experience in cloud computing by working with key AWS services such as EC2, S3, Lambda,
RDS, CloudFormation, and IAM. During the internship, I assisted with configuring and optimizing
cloud resources, automating infrastructure deployments, monitoring system performance, and
implementing security best practices. I also contributed to troubleshooting scalability and
availability issues while working with tools like AWS CLI, Terraform, and CloudWatch to improve
operational efficiency.
Some internships allow individuals to perform scientific research, while others are specifically designed to let people gain first-hand working experience.
Internships are a great way to build your resume and develop skills that can be emphasized when applying for future jobs. When applying for a training internship, make sure to highlight any special skills or talents that set you apart from the rest of the applicants, so that you have an improved chance of landing the position.
WEEKLY OVERVIEW OF INTERNSHIP ACTIVITIES

DATE         DAY         NAME OF THE TOPIC/MODULE COMPLETED
25-06-2024   Tuesday     Linux Users and Groups
10-07-2024   Wednesday   Activity on web application deployment using the EC2 compute service
11-07-2024   Thursday    Introduction to storage technologies
12-07-2024   Friday      Block vs. object storage services
13-07-2024   Saturday    Working with AWS S3
15-07-2024   Monday      Volumes and snapshots using AWS Elastic Block Storage (7th week)
LIST OF FIGURES
Fig 1.1  plt.imshow(X_train[0])
Fig 1.2  plt.imshow(X_test[0])
Fig 2.1  plt.imshow(X_valid[0])
Fig 2.5  Original
Fig 2.6  Scaled
Fig 2.7  Translation
Fig 2.8  Rotation
Amazon Web Services (AWS) is a comprehensive and widely adopted cloud platform, offering a vast
range of services that enable organizations to move, manage, and scale their workloads in the cloud. As one of the
earliest pioneers of cloud computing, AWS has grown to become the most extensive and widely used cloud
infrastructure in the world, powering millions of customers across various sectors, including startups, enterprises,
government agencies, and non-profit organizations. The platform delivers on-demand computing resources—such
as storage, databases, networking, and machine learning—over the internet, eliminating the need for businesses to
invest in and manage physical hardware and data centers.
One of the key advantages of AWS is its scalability and flexibility. Organizations can quickly scale their
infrastructure up or down based on demand, reducing costs and improving efficiency. AWS operates on a pay-as-
you-go pricing model, where users only pay for the services they consume, which helps organizations avoid the
high upfront costs of traditional IT infrastructure. Among its many services, AWS offers compute power through
Amazon EC2 (Elastic Compute Cloud), object storage through Amazon S3 (Simple Storage Service), data
management via Amazon RDS (Relational Database Service), and serverless computing with AWS Lambda. These
services, along with tools for networking, analytics, security, and application development, provide a
comprehensive environment for building, deploying, and managing cloud-based applications.
In addition to its technical offerings, AWS provides robust security features, such as encryption, identity
and access management (IAM), and compliance with a wide range of global standards and regulations. AWS
ensures that data and applications are protected with high availability, fault tolerance, and disaster recovery
capabilities, making it a reliable choice for businesses that require secure, mission-critical applications. The
platform also supports DevOps practices by enabling automation and continuous integration and delivery (CI/CD),
making it easier to build and deploy applications in a consistent, repeatable manner.
Given the rapid growth of cloud adoption across industries, acquiring hands-on experience with AWS is
an essential skill for anyone pursuing a career in cloud computing, software engineering, DevOps, or cloud
architecture. This report details my internship experience at AWS, where I gained practical experience with key
cloud services and developed a deeper understanding of cloud infrastructure management. Throughout my
internship, I worked on various projects that involved deploying, optimizing, and securing AWS resources. This
report will provide insights into the specific AWS services I worked with, the challenges I faced, the technical skills
I developed, and the overall impact this experience had on my understanding of cloud computing and AWS's role in
modern IT environments.
In summary, AWS is a critical enabler of cloud innovation, helping businesses scale more efficiently,
reduce operational costs, and enhance their agility in today's fast-paced digital world. Through my internship, I was
able to gain firsthand knowledge of how AWS supports organizations in driving digital transformation, and I have
developed the skills and expertise necessary to navigate the cloud ecosystem with confidence. The hands-on
experience with AWS has provided me with a solid foundation in cloud computing, equipping me with the tools and
insights to contribute effectively to cloud-based projects in the future.
MODULE -1
Amazon Web Services (AWS) is a comprehensive and widely adopted cloud platform that provides a broad
range of infrastructure services, including computing power, storage solutions, and networking capabilities. During
my internship, the primary objective was to grasp the fundamental services and tools offered by AWS, understand
their role in building scalable and resilient cloud architectures, and explore how businesses leverage these services
to address their technological requirements.
This report outlines the AWS fundamentals learned during the internship, emphasizing key services,
practical applications, and hands-on experiences. AWS empowers businesses to operate applications and store data
without the need to maintain on-premise hardware, offering cloud-based solutions that are scalable and reliable.
A key advantage of AWS is its pay-as-you-go pricing model, which ensures users only pay for the resources
they consume. This model enhances scalability and flexibility, making it ideal for businesses with varying
workloads. The AWS global infrastructure is structured around geographic regions and availability zones (AZs),
which consist of multiple data centers. This design ensures high availability, fault tolerance, and low latency for
applications.
1.2 Overview of AWS Cloud Operations:
AWS enables organizations to transition from traditional, on-premise hardware to fully cloud-based
environments. This shift provides the following key benefits:
1. Flexibility: Businesses can dynamically adjust resource allocation based on current workloads, ensuring that
they only use what they need. For instance, applications can scale up during peak usage periods (e.g., sales
events) and scale down during quieter times, reducing unnecessary expenditure.
2. Reduced Capital Expenditure: By leveraging AWS’s infrastructure, businesses eliminate the need for
costly upfront investments in physical servers, storage, and networking hardware. This is particularly
beneficial for startups and small businesses, which can invest their resources in innovation rather than
infrastructure.
3. Remote Access: Cloud-based operations enable teams to access resources and applications from anywhere,
ensuring seamless operations even for distributed or remote workforces.
4. Accelerated Deployment: AWS's pre-configured services and tools allow businesses to quickly deploy
applications without worrying about hardware setup or compatibility issues.
5. Eco-Friendly Operations: AWS data centers are optimized for energy efficiency, helping organizations
reduce their carbon footprint compared to maintaining on-premise systems.
Pay-As-You-Go Model
AWS's pay-as-you-go pricing structure revolutionizes the way businesses approach budgeting for IT
resources. Key aspects include:
1. Cost Efficiency: Businesses pay only for the resources they use, such as computing power, storage, or data
transfer, on an hourly or per-second basis (depending on the service).
2. Predictable Billing: Tools like AWS Cost Explorer and Billing Dashboard provide detailed insights into
resource consumption, allowing businesses to forecast and manage expenses effectively.
3. Scaling Without Overspending: The model supports elasticity, meaning businesses can expand their
operations during high demand and shrink them back when demand decreases—without being locked into
long-term commitments.
4. Reserved and Spot Instances: AWS offers additional cost-saving mechanisms, such as:
o Reserved Instances: Lower pricing for customers who commit to using specific resources for 1 or 3 years.
o Spot Instances: Heavily discounted rates for spare compute capacity, ideal for flexible and non-time-sensitive
workloads.
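To make the pricing mechanics concrete, the back-of-the-envelope comparison below contrasts the three purchase options for a single instance. This is a minimal sketch; the hourly rates are illustrative placeholders, not actual AWS prices, which vary by instance type and region:

# Hypothetical hourly rates (USD) for one instance type
on_demand_rate = 0.10
reserved_rate = 0.06   # effective rate with a 1-year commitment
spot_rate = 0.03       # spare capacity, can be interrupted

hours_per_month = 730  # approximate hours in a month
for name, rate in [("On-Demand", on_demand_rate),
                   ("Reserved", reserved_rate),
                   ("Spot", spot_rate)]:
    print(f"{name}: ${rate * hours_per_month:.2f}/month")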
1.3 AWS Global Infrastructure:
1. Regions:
AWS is divided into multiple Regions, which are geographically separate areas around the globe (e.g.,
North America, Europe, Asia-Pacific). Each region operates independently to offer localized services while
maintaining compliance with specific regional regulations.
2. Availability Zones (AZs):
Each region consists of multiple Availability Zones, which are clusters of data centers. These AZs are
physically separated to prevent failures in one AZ from affecting another. Businesses can distribute their
applications across multiple AZs to achieve high availability and fault tolerance.
3. Edge Locations:
AWS also employs edge locations as part of its Amazon CloudFront content delivery network (CDN).
These locations cache data closer to end-users, reducing latency and improving user experience.
4. Disaster Recovery and Fault Tolerance:
o Multi-Region Deployment: Businesses can replicate their applications across regions to ensure
uninterrupted service in case of outages.
o Automated Backups and Replication: Services like AWS Backup and Cross-Region Replication
allow businesses to safeguard data and recover quickly from disasters.
5. Security:
AWS infrastructure is designed with security in mind, offering:
o End-to-end encryption for data in transit and at rest.
o Physical security for data centers.
o Compliance with global standards such as GDPR, ISO, and HIPAA.
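The region and availability-zone layout described above can be inspected programmatically. A minimal sketch using the boto3 SDK, assuming AWS credentials are already configured:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# List all regions enabled for this account
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print("Regions:", regions)

# List the availability zones within the current region
zones = [z["ZoneName"] for z in ec2.describe_availability_zones()["AvailabilityZones"]]
print("AZs in us-east-1:", zones)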
MODULE-2
2. AWS Lambda
Lambda enables serverless computing, where code runs in response to events without needing server
management.
o Internship Applications:
Automated workflows by writing Lambda functions to process incoming data streams via
Amazon Kinesis.
Integrated Lambda with Amazon S3 to trigger events when files were uploaded, such as
generating metadata or running data transformations.
o Key Features:
Supports multiple programming languages.
Pay only for the execution time of functions.
Scales automatically to handle high traffic.
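As an illustration of the S3-triggered workflow mentioned above, here is a minimal sketch of a Lambda handler that reads basic metadata for a newly uploaded object. The processing logic is a placeholder, not the exact function used during the internship:

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # S3 put events carry the bucket name and object key in the Records list
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    # Fetch object metadata (size, content type) without downloading the body
    head = s3.head_object(Bucket=bucket, Key=key)
    print(f"{key}: {head['ContentLength']} bytes, {head.get('ContentType')}")
    return {"bucket": bucket, "key": key, "size": head["ContentLength"]}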
2.2 Storage Services
2.3 Networking Services
1. Amazon VPC
VPC (Virtual Private Cloud) provides logically isolated networks within AWS.
o Internship Applications:
Created VPCs with public and private subnets to host web servers and backend databases securely.
Configured route tables and internet gateways for controlled traffic flow.
Set up security groups and network access control lists (NACLs) to restrict access based on IPs and ports.
o Key Features:
Customizable CIDR block ranges for IP address management.
Integration with AWS Direct Connect for hybrid cloud setups.
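A condensed boto3 sketch of how such a VPC might be created; the CIDR ranges are illustrative, not the ones used during the internship:

import boto3

ec2 = boto3.client("ec2")

# Create the VPC and a public subnet inside it
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

# Attach an internet gateway and route public traffic through it
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])
rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(RouteTableId=rt["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw["InternetGatewayId"])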
2.4 Database Services
1. Amazon RDS
RDS simplifies the setup and management of relational databases like MySQL, PostgreSQL, and SQL Server.
o Internship Applications:
Deployed a MySQL database instance to support a web application backend.
Configured automated backups and read replicas to ensure high availability.
o Key Features:
Automated patching and backups.
Supports multi-AZ deployments for disaster recovery.
Provides performance insights for optimization.
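A minimal boto3 sketch of provisioning a MySQL instance similar to the one described; the identifier and credentials are placeholders:

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="webapp-db",     # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                  # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-now",   # use Secrets Manager in practice
    MultiAZ=True,                         # standby replica in another AZ
    BackupRetentionPeriod=7,              # keep automated backups for 7 days
)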
3. Amazon DynamoDB
DynamoDB is a fully managed NoSQL database designed for high-performance, low-latency applications.
o Internship Applications:
Created a DynamoDB table to store real-time data from IoT sensors, ensuring consistent
performance during high data influx.
Integrated with DynamoDB Streams to trigger event-driven workflows via Lambda.
o Key Features:
Automatically scales throughput.
Provides in-memory caching with DAX (DynamoDB Accelerator).
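A brief sketch of the kind of table used for the sensor data; the table and attribute names are illustrative:

import boto3

dynamodb = boto3.resource("dynamodb")

# On-demand billing scales throughput automatically with traffic
table = dynamodb.create_table(
    TableName="SensorReadings",
    KeySchema=[
        {"AttributeName": "sensor_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "timestamp", "KeyType": "RANGE"},  # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "sensor_id", "AttributeType": "S"},
        {"AttributeName": "timestamp", "AttributeType": "N"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()
table.put_item(Item={"sensor_id": "s-001", "timestamp": 1720000000, "temp_c": 31})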
2.5 Monitoring and Management Services
1. AWS CloudWatch
CloudWatch monitors AWS resources and applications, offering real-time metrics and customizable
dashboards.
o Internship Applications:
Configured alarms to notify administrators of abnormal resource utilization (e.g., high CPU
usage on EC2 instances).
Analyzed logs for troubleshooting and identifying performance bottlenecks.
o Key Features:
Integrated with all AWS services for comprehensive monitoring.
Supports custom metrics and log aggregation.
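A minimal sketch of an alarm like the CPU-utilization example above; the instance ID and SNS topic ARN are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-ec2",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                  # evaluate 5-minute averages
    EvaluationPeriods=2,         # two consecutive breaches trigger the alarm
    Threshold=80.0,              # percent CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
)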
2. IAM
IAM manages access to AWS resources securely by defining roles, policies, and permissions.
o Internship Applications:
Configured user groups with appropriate permissions for developers and administrators.
Implemented least-privilege access policies to minimize security risks.
o Key Features:
Supports multi-factor authentication (MFA).
Centralized access control for AWS accounts.
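To illustrate the least-privilege idea, here is a sketch of a policy granting read-only access to a single hypothetical bucket:

import boto3
import json

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],  # read-only actions only
        "Resource": [
            "arn:aws:s3:::example-bucket",            # hypothetical bucket
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}
iam.create_policy(PolicyName="s3-readonly-example", PolicyDocument=json.dumps(policy))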
MODULE – 3
Benefits of Cloud Services:
1. Compute Services
Examples:
Amazon EC2 (Elastic Compute Cloud): Virtual servers for running applications.
AWS Lambda: Serverless computing to run code on demand.
Amazon ECS (Elastic Container Service) & EKS (Elastic Kubernetes Service): Container orchestration
for microservices.
2. Storage Services
Benefits:
Highly durable and available storage solutions (e.g., 99.999999999% durability in S3).
Secure storage with encryption options.
Cost optimization through lifecycle policies.
Seamless integration with compute and analytics services.
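The lifecycle-policy benefit mentioned above can be expressed in a few lines of boto3; the bucket name and rules here are illustrative:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # hypothetical bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-old-logs",
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],  # move to cold storage
        "Expiration": {"Days": 365},                               # delete after a year
    }]},
)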
3. Database Services
Examples:
Amazon RDS (Relational Database Service): Managed relational databases like MySQL, PostgreSQL, and
SQL Server.
Amazon DynamoDB: A fully managed NoSQL database.
Amazon Redshift: A data warehousing service for analytics.
6. Developer Tools
Benefits:
Continuous integration and delivery (CI/CD) pipelines.
Automation of build and deployment processes.
Enhanced collaboration between development and operations teams.
8. Analytics
Examples:
Amazon EMR (Elastic MapReduce): Big data processing with Apache Hadoop.
Amazon Kinesis: Real-time data streaming and analytics.
Amazon Athena: Serverless SQL queries on data stored in S3.
9. IoT Services
AWS IoT services enable the connection, monitoring, and management of IoT devices.
The first layer is the Embedding Layer, which transforms input tokens (like words or characters)
into dense, fixed-size vectors that capture semantic relationships, creating a meaningful numerical
representation in a continuous vector space.
Following this, the Recurrent Layer (such as RNN, LSTM, or GRU) processes the data
sequentially, maintaining a hidden state that updates with each input, making it highly effective for tasks
where the context of prior elements in the sequence matters, like sentences or time-based data.
Next comes the Attention Layer, which calculates attention weights to focus on important parts of
the sequence relative to each other, allowing the model to identify which elements in the sequence are most
relevant at any given point. This layer is especially vital in transformer models and encoder-decoder
architectures, where it greatly enhances the model's ability to capture complex dependencies within the
data.
The Feedforward Layer (or Dense Layer) then applies transformations to the features extracted
from previous layers. In transformer models, this layer is applied after the self-attention mechanism to
further process and refine the features.
Finally, the Output Layer produces the final prediction or classification probabilities. For
classification tasks, this layer often includes a softmax activation function, while in regression or sequence
generation tasks, it may have a different structure or activation function. Together, these five layers enable
sequencer models to effectively handle sequential data, each layer building on the previous one to
progressively capture, transform, and output information.
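The five layers described above can be sketched in a few lines of Keras. The sizes are arbitrary, and the built-in Luong-style layers.Attention stands in here for the attention mechanism described; this is a minimal sketch, not a production architecture:

import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, seq_len = 10000, 50  # illustrative sizes

inputs = layers.Input(shape=(seq_len,))
x = layers.Embedding(vocab_size, 64)(inputs)         # 1. embedding layer
x = layers.LSTM(128, return_sequences=True)(x)       # 2. recurrent layer
x = layers.Attention()([x, x])                       # 3. self-attention over the sequence
x = layers.GlobalAveragePooling1D()(x)               # pool attended features
x = layers.Dense(64, activation="relu")(x)           # 4. feedforward layer
outputs = layers.Dense(10, activation="softmax")(x)  # 5. output layer
model = Model(inputs, outputs)
model.summary()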
MODULE-3
Artificial Intelligence (AI) has significantly advanced the field of computer vision, enabling
machines to interpret and process visual information from the world in ways similar to human vision. AI
vision, also known as computer vision, refers to the ability of computers to understand and analyze digital
images and videos to perform tasks such as object recognition, image classification, segmentation, and
tracking. This capability is crucial for many real-world applications, including autonomous vehicles, facial
recognition, medical imaging, and industrial automation. AI vision systems rely on large datasets of
labeled images and videos to train models capable of identifying patterns, detecting objects, and making
decisions based on visual inputs.
In AI vision, image classification is one of the most fundamental tasks. Image classification involves
assigning a label or category to an image based on its content. For example, an image classification model
might be trained to identify whether an image contains a cat, a dog, or a car. The process of image
classification typically begins with data preprocessing, where raw images are prepared for analysis by
converting them into a numerical format that a machine learning model can understand. Common
preprocessing techniques include image resizing, normalization, and augmentation (such as rotating or
flipping images) to increase the diversity of training data. Following preprocessing, feature extraction is
performed to identify the distinctive patterns in the image that will be used for classification.
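A small sketch of the preprocessing steps just described (resizing, normalization, simple augmentation), using TensorFlow; the target size is an arbitrary choice:

import tensorflow as tf

def preprocess(image, label, training=False):
    image = tf.image.resize(image, (224, 224))          # standardize input size
    image = tf.cast(image, tf.float32) / 255.0          # normalize pixels to [0, 1]
    if training:
        image = tf.image.random_flip_left_right(image)  # augmentation: random flip
    return image, label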
In traditional machine learning methods, this feature extraction was manual, requiring domain
expertise. However, with the advent of deep learning, feature extraction has become automatic through the
use of neural networks. Neural networks, particularly Convolutional Neural Networks (CNNs), have
revolutionized the field of image classification. CNNs are a class of deep learning models specifically
designed for processing grid-like data such as images.
The advantage of CNNs is their ability to automatically learn which features are most relevant for a
specific task without requiring manual intervention. CNNs consist of several layers, including
convolutional layers, pooling layers (which down-sample the data to reduce its dimensionality), and fully
connected layers (which aggregate the learned features for final classification). The use of CNNs has made
tasks like object detection and facial recognition highly accurate and efficient.
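A compact Keras sketch of the layer stack described above (convolution, pooling, fully connected); the input shape and filter counts are illustrative:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),  # convolutional layer
    layers.MaxPooling2D((2, 2)),                                            # pooling: down-sample
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),     # fully connected layer
    layers.Dense(10, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])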
In addition to image classification, neural networks are used for more complex tasks in AI vision,
such as object detection and image segmentation. Object detection goes beyond classifying an image by
identifying and localizing multiple objects within an image. This is particularly useful in autonomous
driving, where the system must recognize various objects, including pedestrians, traffic signs, and other
vehicles, while simultaneously determining their location within the scene. CNN-based models like YOLO
(You Only Look Once) and RCNN (Region-based Convolutional Neural Networks) have been
instrumental in achieving real-time object detection.
Similarly, image segmentation tasks involve classifying each pixel in an image, dividing it into
meaningful segments, such as distinguishing between the background and the foreground of an image. This
pixel-level understanding of images is critical in fields like medical imaging, where precise segmentation
of organs or tissues is required.
In conclusion, the intersection of AI vision, classification, and neural networks has opened up a
world of possibilities for automating visual recognition tasks across various industries. From improving
image classification accuracy with CNNs to creating synthetic images with GANs, these technologies
continue to push the boundaries of what is possible in computer vision.
This is the area of AI that allows computers to understand and process visual inputs such as images
or videos. Computer vision encompasses tasks like image recognition, object detection, segmentation, and
scene understanding. With advances in deep learning, AI vision has improved significantly, enabling
applications like facial recognition, autonomous vehicles, medical imaging, and augmented reality.
Image classification is a fundamental task in computer vision where the system assigns a label to an
image based on its content. For example, in a dataset of animals, the model might classify images as "cat,"
"dog," or "bird." Classification can also go beyond identifying general categories to more specific tasks,
like diagnosing medical conditions from X-ray images or detecting types of defects in manufacturing.
Convolutional Neural Networks (CNNs) are particularly popular in vision tasks because they are
designed to process grid-like data, such as images. CNNs can learn to detect patterns through layers of
filters that capture edges, textures, shapes, and higher-order features. These networks are trained on large
datasets to generalize and recognize similar patterns in new images. More complex architectures like
Renat, Inception, and VGG have further improved image classification accuracy and efficiency, while
newer models, such as Vision Transformers (VIT), apply transformer-based architectures to vision tasks,
achieving state-of-the-art performance in various computer vision benchmarks.
AI Vision, or computer vision, aims to replicate human sight and interpretation capabilities,
allowing machines to gather insights from visual data. This process goes beyond just identifying objects; it
includes understanding context, relationships between objects, and even complex scenarios, such as
detecting emotions or understanding interactions in a scene. One of the foundational techniques in AI
vision is image preprocessing, which involves preparing images for analysis by improving quality,
reducing noise, and standardizing image sizes.
MODULE -4
4.1 Reinforcement Learning & AI Problem Solving:
This segment introduced reinforcement learning, which involves training AI models to make
decisions based on interactions with their environment. Topics covered included Markov Decision
Processes (MDPs), Q-learning, and policy gradients. The module also explored problem-solving
techniques using AI, including uninformed search methods like BFS and DFS, informed search algorithms
like A*, and constraint satisfaction problems (CSPs). These concepts are foundational in developing
intelligent systems capable of solving complex problems autonomously.
Reinforcement Learning (RL) is a crucial paradigm in artificial intelligence (AI) that enables
machines to learn by interacting with their environment. Unlike traditional supervised learning, where
models learn from labelled data, RL involves an agent that learns to make decisions through trial and error
by receiving rewards or penalties based on the actions it takes. The goal of RL is to develop a policy—a
strategy that tells the agent the best action to take in a given state to maximize the cumulative reward over
time. This approach has been instrumental in solving complex decision-making problems in various fields
such as robotics, gaming, finance, healthcare, and autonomous systems.
In RL, the agent operates within an environment and follows a cyclical process: it perceives the
state of the environment, selects an action, and then receives feedback in the form of a reward. This reward
serves as the signal that the agent uses to learn and improve its behavior over time. A key concept in RL is
the "exploration exploitation trade-off," where the agent must balance exploring new strategies to discover
better rewards versus exploiting known strategies that have yielded high rewards in the past. Over time, the
agent learns a policy that optimizes long-term rewards. Techniques like Q-learning, Deep Q-Networks
(DQN), and Policy Gradient methods have been developed to help agents efficiently learn in complex
environments.
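A minimal, environment-agnostic sketch of the tabular Q-learning update and the epsilon-greedy exploration strategy described above; the state and action sizes are arbitrary:

import numpy as np

n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount factor, exploration rate
Q = np.zeros((n_states, n_actions))     # Q-table of action values

def choose_action(state):
    # Exploration-exploitation trade-off: random action with probability epsilon
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    # Q-learning: move Q(s, a) toward the reward plus discounted best next value
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])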
The real strength of RL comes from its ability to solve sequential decision-making problems, where
the outcome of one decision impacts the next. For example, in gaming, an RL agent learns how to navigate
a series of moves to win, considering the consequences of each action on future rewards. Reinforcement
Learning (RL) is a type of machine learning where an agent learns to make decisions by interacting with an
environment. Unlike supervised learning, where models learn from labeled data, RL agents learn through
trial and error by receiving rewards or penalties for their actions, which gradually guides them toward
achieving a specified goal.
4.2 Applications of Reinforcement Learning in AI Problem Solving
Another domain where RL has shown great promise is in autonomous vehicles. The decision-
making process in self-driving cars involves navigating traffic, avoiding obstacles, and adhering to road
rules while minimizing the risk of accidents. RL provides a framework for these systems to learn optimal
driving policies by interacting with virtual or real environments, gradually improving through feedback
and real-world data. RL allows these systems to adapt to highly dynamic, uncertain environments, ensuring
safer and more efficient driving decisions.
In finance, RL is applied to trading algorithms, where the goal is to maximize long-term profit.
Traders face a complex environment with fluctuating market conditions, and RL enables systems to learn
strategies for buying and selling assets by evaluating the outcomes of their actions in various market states.
The system constantly refines its policy to balance risk and return. This approach is also useful in portfolio
management, where RL helps in learning optimal asset allocation strategies over time. Despite its
remarkable success, RL faces several challenges. One of the most significant is the issue of sample
inefficiency. RL often requires a large number of interactions with the environment to learn an optimal
policy, which is particularly problematic in environments where obtaining real-world data is expensive or
risky (e.g., healthcare or autonomous driving).
As RL continues to evolve, it will undoubtedly unlock new possibilities in AI driven solutions, pushing the
boundaries of what intelligent systems can achieve.
Reinforcement Learning (RL) is widely applied in AI to solve complex, dynamic problems where
decision-making unfolds over time, making it essential in areas requiring adaptive, sequential strategies. In
gaming, RL has achieved remarkable results, with AI agents mastering games such as Go, Dota 2, and
Chess, often surpassing human expertise by developing novel strategies.
In robotics, RL is used to train robots for tasks like object manipulation, path planning, and even
humanoid movement, allowing them to learn skills through trial and error in real or simulated
environments. This capability is essential for applications in industries ranging from manufacturing to
healthcare, where precision and adaptability are crucial. In autonomous driving, RL algorithms play a vital
role in navigation, obstacle avoidance, and decision-making by allowing vehicles to learn optimal driving
behaviors in dynamic, unpredictable traffic environments, enhancing road safety and efficiency.
Finance also benefits from RL, as trading algorithms learn to make strategic investment decisions
by analyzing market data and adapting to changing conditions, optimizing returns over time. Additionally,
in healthcare, RL is applied to personalize treatment plans and optimize medical interventions, where
agents learn to adjust dosage or select therapies based on individual patient responses, improving outcomes
in fields such as chronic disease management. The ability of RL to continuously improve through
experience and adapt to new challenges makes it a powerful tool across these diverse applications, each
requiring sophisticated problem-solving in ever-changing environments.
MODULE -5
Python Basics:
5.1 Features of Python:
Python is widely used across domains such as:
Robotics
IoT
AI/ML
Computer vision
Data science
GUI Applications
Mobile Applications
6. Reduces the programmer's work
Example: Factorial
In Python, we can use the math module and simply call factorial in one line.
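For instance:

import math
print(math.factorial(5))   # 120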
5.2 Environments:
Python IDLE
PyCharm
Jupyter Notebook
Spyder
Visual Studio Code
Google Colab (cloud based)
CPU: Central Processing Unit
GPU: Graphical Processing Unit
TPU: Tensor Processing Unit
Python file formats:
1. .py (Python script)
2. .ipynb (interactive Python notebook)
Notebooks also let us write documentation and display visualizations such as:
o Bar chart
o Graph
o Pie chart, etc.
ex: 2.3, 5.5754, 6.799
c) complex: a number which has real and imaginary parts (a+ib)
a - real part
b - imaginary part
syntax: a+b*j
ex: 2+4j
1. Arithmetic Operators
/ (division): returns a float value
// (floor division): returns an int value
% (modulus): returns the remainder
** (exponentiation): power
2. Relational operators: these return Boolean values
<: 5<6 True
>: 5>6 False
<=: 5<=6 True
>=: 5>=6 False
==: 5==6 False
!=: 5!=6 True
ex: #Relational operators
a=100
b=200
a<b
True
a>b
False
a<=b
True
a>=b
False
a==b
False
a!=b
True
3. Logical operators: and, or, not
ex:
not a==30
True
4. Assignment operators:
= (normal assignment)
ex:
a=10
+= (addition assignment): first we add, then assign
5.6 Advanced Datatypes:
Lists:
A list is a collection of values (of the same datatype or different datatypes).
A list can be created using [ ].
List index (position) starts from zero.
EX: #LISTS
L=[]
type(L)
<class 'list'>
L=[10,20,30,40]
type(L)
<class 'list'>
M=[10,1.2,'hello',13,2+6j]
type(M)
<class 'list'>
Operations on Lists:
1. append(): adding elements to a list (at the end)
#append
A=[]
A.append(10)
A.append(20)
A.append(30)
A.append(40)
print(A)
[10, 20, 30, 40]
append() will add elements at the end only, so we can't add elements in between or at other positions.
3. delete: removing an element from a list
syntax: del listname[index]
ex:
#delete
print(A)
[15, 20, 25, 30, 40, 50]
del A[2]
print(A)
[15, 20, 30, 40, 50]
4. update: modifying list elements
syntax: listname[index]=new_value
ex:
#update
print(A)
[15, 20, 30, 40, 50]
A[0]=100
print(A)
[100, 20, 30, 40, 50]
A[1]=200
print(A)
[100, 200, 30, 40, 50]
A[2]=300
print(A)
[100, 200, 300, 40, 50]
5. count: gives the number of elements in a list
syntax: len(listname)
ex:
#count
len(A)
5
6. repeat: repeating elements in a list
syntax: listname*n
ex:
#Repeat
print(A)
[100, 200, 300, 40, 50]
A*3
[100, 200, 300, 40, 50, 100, 200, 300, 40, 50, 100, 200, 300, 40, 50]
7. Accessing or printing:
By using an index we can print list elements. Positive indexing starts from 0; negative indexing starts from -1, so we can also access list elements using negative indexing.
syntax: listname[index]
ex:
#Accessing
A[0]
100
A[1]
200
A[2]
300
A[-1]
50
A[-2]
40
A[-3]
300
9. merge: combining two lists
syntax: List1+List2
ex:
#merge
print(A)
[100, 200, 300, 40, 50]
B=[10,20,30]
A+B
[100, 200, 300, 40, 50, 10, 20, 30]
10. extend: somewhat similar to merge
syntax: List1.extend(List2)
ex:
C=[10,20,30]
D=[40,50,60]
C+D
[10, 20, 30, 40, 50, 60]
print(C)
[10, 20, 30]
C.extend(D)
C
[10, 20, 30, 40, 50, 60]
11. slicing: accessing part of a list
syntax: listname[leftindex:rightindex]
L[1:4] gives the elements L[1], L[2], L[3]
ex:
#slice
print(A)
[100, 200, 300, 40, 50, 10, 20, 30]
A[1:4]
[200, 300, 40]
A[2:4]
[300, 40]
A[:3]
[100, 200, 300]
A[1:]
[200, 300, 40, 50, 10, 20, 30]
print(A)
[100, 200, 300, 40, 50, 10, 20, 30]
12. sort: arranging the elements in order
syntax: List.sort()
ex:
#sort
print(A)
[100, 200, 300, 40, 50, 10, 20, 30]
A.sort()
print(A)
[10, 20, 30, 40, 50, 100, 200, 300]
#reverse
A.sort(reverse=True)
A
[300, 200, 100, 50, 40, 30, 20, 10]
2. INSERT OPERATION:
print("INSERT")
i1=int(input("enter the position where we want to insert"))
v1=int(input("enter the value"))
L.insert(i1,v1)
print("element inserted")
i2=int(input("enter the position where we want to insert"))
v2=int(input("enter the value"))
L.insert(i2,v2)
print("element inserted")
print("after insert operation the list is",L)
3. DELETE OPERATION:
print("DELETE")
d1=int(input("enter the position of element to be deleted"))
del L[d1]
print("after deleting the element the list is",L)
4. UPDATE OPERATION:
print("UPDATE")
u1=int(input("enter the index of the element to be updated"))
new_val=int(input('enter the new value'))
L[u1]=new_val
print("after update the list is",L)
5. REPEAT OPERATION:
print("REPEAT")
r=int(input('enter how many times you want to repeat the list elements'))
print("The repeated list is",L*r)
6. COUNT
7. MIN
8. MAX
print('COUNT')
print("The number of elements in list is",len(L))
print("MIN: minimum element in list is",min(L))
print("MAX: maximum element in list is",max(L))
9. SLICING:
print('SLICING USING BOTH INDEXES')
l1=int(input('enter left index'))
r1=int(input('enter right index'))
print("After Slicing the list is",L[l1:r1])
print('SLICING USING LEFT INDEX ONLY')
l2=int(input('enter left index'))
print("After Slicing using left index the list is",L[l2:])
print('SLICING USING RIGHT INDEX ONLY')
r2=int(input('enter right index'))
print("After Slicing using right index the list is",L[:r2])
11. EXTEND OPERATION:
print("EXTEND")
L.extend(M)
print('after extending the original list is',L)
5.8 TUPLE:
A tuple is somewhat similar to a list, but the main difference between a tuple and a list is that a list is mutable whereas a tuple is immutable. Mutable means elements can be added, deleted, updated, etc., whereas immutable means the elements can't be changed.
A tuple can be created with ( ).
ex:
t=(100,20,10,30,50,70,40)
1. append
2. insert
3. update
4. delete
5. sort
6. extend
The above operations try to change the structure of the tuple; hence, they are not possible. t.sort() gives an error, but sorted(t) returns a new sorted list:
sorted(t)
[10, 20, 30, 40, 50, 70, 100]
Is slicing possible in a tuple? Yes.
Is merging possible in a tuple? Yes.
Is extend possible in a tuple? No.
Is repeat possible? Yes.
When do we prefer a tuple over a list?
We prefer a tuple when we are going to work with static data or constants. A tuple can be converted into a list.
5.9 SETS:
A set is a collection of unordered elements. A list is ordered, and a tuple is also ordered. Ordered means sequential, and elements can be accessed using an index; unordered means elements cannot be accessed using an index.
Sets can be created using { }.
Operations on Sets:
1. add(): adding elements to a set
ex:
#sets
s={10,20,30,40}
type(s)
<class 'set'>
#add
s.add(50)
s.add(60)
s
{40, 10, 50, 20, 60, 30}
2. delete: deleting elements from a set
remove()
ex:
s.remove(20)
s
{40, 10, 50, 60, 30}
pop()
ex:
s.pop()
40
s
{10, 50, 60, 30}
3. Union: combining elements from both sets
ex:
s1={10,20,30}
s2={100,200,300}
s1.union(s2)
{20, 100, 200, 10, 300, 30}
4. Intersection: finding the common elements of both sets
ex:
#INTERSECTION
s1.intersection(s2)
set()
s3={100,1000,500}
s2.intersection(s3)
{100}
5. Set difference: gives the remaining elements of the first set, excluding the common elements
ex:
#set difference
s2-s3
{200, 300}
s3-s2
{1000, 500}
8. Frozenset(): a set which is read-only or frozen (not modifiable)
9. discard(): removes an element from a set, like remove() and pop()
discard() vs remove():
discard() will not give any error even though the element to be deleted is not available in the set, but remove() will give an error if the element is not there in the set.
ex:
A
{50, 20, 40, 30}
ANNEXURE
IMPORTING LIBRARIES
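A representative import cell for this kind of image-classification notebook; the exact libraries are assumptions based on those named elsewhere in this report:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split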
Fig 1.1 (plt.imshow(X_train[0])) and Fig 1.2 (plt.imshow(X_test[0]))
Fig 1.3 (Speed limit (80km/h)) Fig 1.4 (Speed limit (70km/h))
Fig 1.8 (Keep Right)
Fig 2.1 (plt.imshow(X_valid[0]))
Fig 2.2 (Train Labels), Fig 2.3 (Valid Labels) and Fig 2.4 (Test Labels)
Fig 2.5 (Original) Fig 2.6 (Scaled) Fig 2.7 (Translation) Fig 2.8 (Rotation)
Fig 2.9 (Labels of the Train and Augmentation)
Fig 3.0 (Epoch Value)
CONCLUSION
Completing the AI virtual internship with Skill Dzire has been an invaluable experience that
deepened my understanding of artificial intelligence and its real-world applications. By gaining hands-on
experience with AI tools and frameworks, I developed strong technical skills and enhanced my problem-
solving capabilities. This internship has not only broadened my expertise in machine learning, natural
language processing, and data analysis but has also prepared me to tackle complex challenges in AI with a
strategic approach. I am now better equipped to pursue advanced opportunities in the field of AI, confident
in my ability to contribute to innovative, data-driven solutions.
EXECUTIVE SUMMARY
This report summarizes my two-month virtual internship experience in artificial intelligence (AI)
with Skill Dzire. Over the course of the internship, I gained hands-on experience with key AI concepts and
tools, working on practical projects that included data analysis, machine learning, and model evaluation.
This experience allowed me to apply theoretical knowledge in a professional environment, deepening my
understanding of AI workflows and problem-solving techniques.
I had the chance to work on advanced topics such as neural networks, natural language processing,
and computer vision, which expanded my technical skills and introduced me to the diverse applications of
AI across industries. The mentorship I received was invaluable, as industry professionals guided me
through complex tasks and provided insights into the ethical considerations essential to responsible AI
development. Working alongside fellow interns also enhanced my communication, teamwork, and
adaptability skills, as we collaborated to tackle challenges and share knowledge.
Overall, this two-month internship significantly strengthened my foundation in AI, giving me both
the technical skills and confidence to pursue a career in this dynamic field. I am grateful for the
opportunity to learn and grow at Skill Dzire and look forward to applying these skills in future AI-driven
projects.
Effective communication and collaboration were also critical aspects of the internship, given the
virtual format. Coordinating with mentors and peers required proactive communication and adaptability, as
immediate feedback and quick problem-solving were sometimes limited. This experience highlighted the
importance of clear, concise communication in overcoming barriers to remote teamwork. Another
significant learning area was understanding the ethical implications of AI. Ensuring responsible AI
practices, such as minimizing biases and maintaining transparency, required awareness of both technical
and ethical dimensions, which deepened my perspective on the broader societal impact of AI.
ABOUT THE COMPANY
Skill Dzire is a leading training and skill development company focused on empowering individuals with
the practical skills and industry knowledge needed to excel in emerging fields such as Artificial
Intelligence (AI), Machine Learning (ML), Data Science, and software development. By bridging the gap
between theoretical learning and industry demands, Skill Dzire is dedicated to fostering a skilled
workforce that is prepared to tackle real-world challenges in today’s technology-driven world.
Mission:
Skill Dzire’s mission is to empower individuals with cutting-edge technical skills, practical experience,
and real-world knowledge, making them job-ready and competitive in the global workforce. Through high-
quality training, hands-on projects, and industry-aligned curriculum, Skill Dzire aims to build a robust,
skilled talent pool that meets the evolving needs of industries and contributes positively to society.
Vision:
Skill Dzire envisions a future where accessible quality skill development allows individuals from all
backgrounds to succeed and thrive in high-demand fields. By becoming a leading force in technical
education and professional training, Skill Dzire seeks to contribute to the creation of a sustainable,
innovation-driven economy, where technology and skill development play a pivotal role in societal
advancement.
OPPORTUNITIES:
During this two-month AI virtual internship, I was given the opportunity to perform the following role:
Intern:
Regular Team Coordination: Collaborated with team members and mentors consistently to
discuss project progress, attend meetings, and stay aligned on objectives and tasks.
Hands-on Experience with AI Tools: Learned and applied various tools and platforms for
developing AI models and analysing data, enhancing practical skills in AI technologies.
Referencing Resources: Utilized GitHub repositories and online resources to deepen knowledge
on AI concepts and techniques relevant to the project.
Skill Assessment: Completed skill-based assessments and tests at the end of the internship,
certifying knowledge and application of AI concepts and tools.
TRAINING:
During the two-month AI virtual internship, I received intensive training in core concepts of
Artificial Intelligence, machine learning techniques, and Python programming. This training was essential
in building a strong foundation in AI development and data processing.
1. Data Collection and Preprocessing: Learned methods to gather, clean, and preprocess raw data, which
is critical for ensuring high-quality input for AI models.
2. Supervised and Unsupervised Learning: Understood the difference between supervised and
unsupervised learning, learning to apply algorithms such as regression, classification, clustering, and
dimensionality reduction.
3. Neural Networks: Gained insight into building and training neural networks, understanding layers,
activation functions, and backpropagation to enhance model accuracy and complexity.
4. Natural Language Processing (NLP): Trained in NLP techniques, including tokenization, stemming,
and sentiment analysis, for processing and understanding text data.
5. Computer Vision: Explored computer vision techniques, such as image classification and object
detection, which enabled me to work with visual data effectively.
6. Model Evaluation and Optimization: Learned how to assess model performance using evaluation
metrics and techniques like cross-validation, and to optimize models through hyperparameter tuning.
7. Ethics and Responsible AI: Covered essential topics on responsible AI, focusing on data privacy, bias
mitigation, and ethical decision-making in AI projects.
Python Programming:
Python was the primary programming language used throughout the internship, given its extensive libraries
and ease of use in AI development.
1. Python Basics: Reinforced the fundamentals of Python, including data types, loops, functions,
and object-oriented programming, to ensure proficiency with the language.
2. Data Handling with Pandas: Trained in using the Pandas library for data manipulation, including
data cleaning, transformation, and aggregation, which was crucial for handling large datasets.
3. Numerical Computing with NumPy: Utilized the NumPy library for efficient numerical
computations, matrix operations, and handling multidimensional arrays, which are frequently used in
machine learning.
4. Data Visualization with Matplotlib and Seaborn: Learned to create visual representations of data
using libraries like Matplotlib and Seaborn, which helped in understanding trends, patterns, and
insights within datasets.
5. Machine Learning Libraries (Scikit-Learn): Received training in Scikit-Learn, a key library for
implementing machine learning algorithms such as linear regression, k-means clustering, and decision
trees.
6. Deep Learning with TensorFlow and Keras: Gained hands-on experience with TensorFlow and
Keras for building, training, and deploying neural network models, enabling a deeper dive into deep
learning applications.
7. Debugging and Testing: Emphasized best practices for debugging code, writing efficient functions,
and testing model performance to ensure robust and accurate results.
The training provided during this internship equipped me with essential AI and Python
programming skills, laying a strong foundation for developing effective AI-driven
solutions.
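As a small illustration of items 5 and 6 from the Python training list (Scikit-Learn and cross-validation), here is a minimal sketch on a built-in dataset:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3)  # simple, interpretable model
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print("Mean accuracy:", scores.mean())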
CHALLENGES FACED: