CLASS-IX
CH-1 AI REFLECTION, PROJECT CYCLE AND AI ETHICS
Introduction to AI
"Artificial intelligence is the ability of machines to perform tasks that typically require
human-like intelligence, such as visual perception, speech recognition, decision-making, and
language translation."
The term Artificial Intelligence (AI) was coined by John McCarthy in his proposal for the 1956
Dartmouth Conference, the first artificial intelligence conference. He defined artificial
intelligence as "the science and engineering of making intelligent machines".
In other words, AI is the simulation of human intelligence in machines that are designed to
think, make decisions, make predictions, learn, and improve on their own.
Domains of AI
Statistical Data is the AI domain that uses statistical techniques to analyse, interpret, and
draw insights from numerical or tabular data.
Computer Vision is the AI domain that works with images and videos, enabling machines to
interpret and understand visual information.
Natural Language Processing (NLP) is the AI domain focused on textual data, enabling
machines to comprehend, generate, and manipulate human language.
Some AI Applications:
Face Lock in Smartphones
Smart assistants
Fraud and Risk Detection
Medical Imaging
Why do we need an AI Project Cycle?
Just as any project needs a plan, an AI project needs a step-by-step framework that takes it
from understanding the problem to deploying a working solution. This framework is called the
AI Project Cycle, and its stages are described below.
AI Project Cycle
Problem Scoping: The first step is to understand and define the problem that we want AI to
solve. Problem scoping is the stage where we set clear goals and outline the objectives of
the AI project.
Data Acquisition: This stage focuses on collecting the relevant data required for the AI
system. Since this data forms the base of the project, care must be taken that it is
collected from reliable and authentic sources.
Data Exploration: This stage involves exploring and analysing the collected data to
interpret patterns, trends, and relationships. Since the data is usually large, visual
representations such as graphs, flow charts, and maps can be used to make the patterns
easier to understand.
Modelling: After exploring the patterns, you need to select the appropriate AI model to
achieve the goal. This model should be able to learn from the data and make predictions.
Evaluation: The selected AI model now needs to be tested and the results need to be
compared with the expected outcome. This helps in evaluating the accuracy and reliability
of the model and improving it.
Deployment: It is the process of implementing an AI model in a real-world scenario. The
model is integrated into the desired software or system and packaged in such a way that it
can be used for practical applications.
Problem scoping
The 4Ws Problem Canvas helps with problem scoping. It provides a structured framework
for analysing and understanding a problem. It consists of four parameters that are required
while solving a problem: Who? What? Where? Why?
Who: It defines who is directly or indirectly affected by the problem.
What: It determines whether the problem really exists and helps to clearly define its
nature.
Where: It helps to identify the situations where the problem is observed or has an impact.
Why: It helps to think about why the given problem is worth solving and how the solution
would benefit the stakeholders as well as society.
The Problem Statement Template
The Problem Statement Template summarises all the key points of the 4Ws Problem Canvas
into a single template, so that the ideas can be quickly revisited whenever needed. It makes
the important aspects of the problem easy to understand and remember for further analysis
and decision-making.
Data Acquisition
What is Data?
Data is a collection of facts or statistics collected for reference or analysis. Data can be
processed into meaningful information using various data analysis techniques and
algorithms. Data can be in the form of text, video, images, audio, and so on.
Types of Data: While developing an AI project, data can be classified into the following
categories:
Training data: Training data is the initial dataset used to train an AI model. It is a set of
examples that helps the AI model learn and identify patterns or perform particular tasks. We
must ensure that the data used to train the AI model is aligned with the problem statement
and is sufficient, relevant, accurate, and wide-ranging.
Testing data: Testing data is used to evaluate the performance of the AI model. It is data
that the AI algorithm has not seen before, and it allows us to check the model's accuracy.
Testing data should represent the information that the AI model will encounter practically in
real-world situations.
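The split between training and testing data described above can be sketched in plain Python; the dataset, the 80/20 ratio, and the seed below are illustrative choices, not part of the original text:

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    """Shuffle the dataset and split it into training and testing sets."""
    items = list(data)
    random.Random(seed).shuffle(items)      # reproducible shuffle
    n_test = int(len(items) * test_ratio)
    return items[n_test:], items[:n_test]   # (training data, testing data)

samples = list(range(10))                   # stand-in for a real dataset
train, test = train_test_split(samples)
print(len(train), len(test))                # 8 2
```

Shuffling before splitting matters: it helps ensure the testing data represents the same kind of information the model saw during training.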
Data Features: Data features describe the type of information that will be collected in
response to the problem statement.
Sources of Data: Once the data features have been shortlisted, we need to find sources to
acquire the required data. Common sources of data include surveys, web scraping, sensors,
cameras, observations, and APIs.
System Maps: System maps are visual diagrams that help to see and understand the
different parts or elements of the AI project. They show how all the elements are connected
or related to each other. They can be used to understand the system’s boundaries and how
it interacts with elements in its surroundings.
Data Exploration:
Need for Visualising Data: Data visualisation helps us in the following ways:
It simplifies complex data; thus, making it easier to comprehend.
It helps gain a deeper understanding of the trends, relationships, and patterns
present within the data.
It uncovers hidden relationships or anomalies (odd behaviour) that may not be
immediately apparent.
It helps us in selecting models for the subsequent AI Project Cycle stage.
Various types of graphical representations include bar charts, line charts, flow charts,
tree diagrams, scatter plots, etc.
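As a small illustration of visualising data, the sketch below draws a text-based bar chart in plain Python; the fruit-sales figures are made up for the example:

```python
def text_bar_chart(counts, width=20):
    """Render category counts as horizontal bars of '#' characters."""
    peak = max(counts.values())
    lines = []
    for label, value in counts.items():
        bar = "#" * round(value / peak * width)   # scale bar to the largest value
        lines.append(f"{label:<10}{bar} {value}")
    return "\n".join(lines)

fruit_sales = {"Apples": 30, "Bananas": 45, "Cherries": 15}   # sample data
print(text_bar_chart(fruit_sales))
```

Even this simple chart makes the trend (Bananas sell the most, Cherries the least) easier to spot than the raw numbers; in practice a plotting library would be used for the same purpose.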
Modelling:
Artificial Intelligence, or AI, refers to any technique that enables computers to mimic
human intelligence. AI-enabled machines think algorithmically and intelligently carry out
the tasks they are given.
Machine Learning, or ML, enables machines to improve at tasks with experience. The
machine learns from its mistakes, takes them into account in the next execution, and
improves itself using its own experience.
Deep Learning, or DL, enables software to train itself to perform tasks using vast amounts
of data. In deep learning, the machine is trained with huge amounts of data, which helps it
train itself around that data. Such machines are intelligent enough to develop algorithms
for themselves.
Generally, AI models can be classified into two approaches: rule-based and learning-based.
In a rule-based approach, the machine follows rules defined by the developer; it does not
learn on its own, so its output does not change unless the rules are changed. In a
learning-based approach, the machine learns patterns and relationships from data on its
own, so its performance can improve with experience.
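As a toy illustration of rule-based versus learning-based approaches, the sketch below contrasts the two on a spam-detection example; the keywords and messages are made up:

```python
# Rule-based: the programmer writes the rules by hand; they never change.
def rule_based_spam(message):
    return "free" in message.lower() or "winner" in message.lower()

# Learning-based: the "rules" (here, flagged words) are derived from labelled data.
def learn_spam_words(labelled_messages):
    spam_words, ham_words = set(), set()
    for message, is_spam in labelled_messages:
        (spam_words if is_spam else ham_words).update(message.lower().split())
    return spam_words - ham_words          # words seen only in spam messages

training = [("win a free prize", True), ("meeting at noon", False)]
learned = learn_spam_words(training)
print(rule_based_spam("You are a WINNER"))   # True
print("free" in learned)                     # True
```

The rule-based function behaves the same forever, while the learned word set changes whenever new labelled data is supplied; real learning-based models use far richer statistics than this word-set trick.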
Evaluation:
When evaluating the efficiency of an AI model, four parameters are commonly considered:
accuracy, precision, recall, and F1 score.
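A minimal sketch of evaluation, assuming the four parameters are accuracy, precision, recall, and F1 score (a common choice) and using made-up binary labels:

```python
def evaluate(actual, predicted):
    """Compute accuracy, precision, recall, and F1 score for binary labels."""
    tp = sum(a and p for a, p in zip(actual, predicted))          # true positives
    fp = sum((not a) and p for a, p in zip(actual, predicted))    # false positives
    fn = sum(a and (not p) for a, p in zip(actual, predicted))    # false negatives
    correct = sum(a == p for a, p in zip(actual, predicted))
    accuracy = correct / len(actual)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

actual    = [1, 1, 0, 0, 1, 0]   # true labels (made-up example)
predicted = [1, 0, 0, 1, 1, 0]   # model's predictions
print(evaluate(actual, predicted))
```

Comparing the model's predictions against the expected outcome in this way shows how accurate and reliable the model is, and where it still needs improvement.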
Deployment:
Key Steps in the Deployment Process
Testing and validation: It involves thorough testing and validation of the AI model to ensure
it performs accurately and reliably under various conditions.
Integration with systems: It includes integrating the model with existing systems to ensure
seamless operation within the current infrastructure.
Monitoring and maintenance: It encompasses ongoing monitoring and maintenance of the
deployed model to ensure its performance remains optimal and to address any issues that
arise.
Ethics and Morality:
AI Ethics Principles:
The following principles of AI Ethics affect the quality of AI solutions:
▪ Human Rights ▪ Bias ▪ Privacy ▪ Inclusion
Human Rights :
When building AI solutions, we need to ensure that they follow human rights.
Bias :
Bias (partiality or a preference for one thing over another) often comes from the collected
data. Bias present in the training data also appears in the results.
Privacy :
We need to have rules which keep our individual and private data safe.
Inclusion: AI must not discriminate against any particular group of the population or cause
them any kind of disadvantage.