Unit-6 Applications of AI
Expert Systems
An expert system is a computer program that is designed to solve complex problems and to provide
decision-making ability like a human expert. It performs this by extracting knowledge from its
knowledge base using reasoning and inference rules according to the user's queries.
The data in the knowledge base is added by humans who are experts in a particular domain, and the
software is used by non-expert users to acquire information.
Expert systems are meant to solve real problems which normally would require a specialized human
expert (such as a doctor). The system helps in decision making for complex problems using both facts
and heuristics, like a human expert.
The performance of an expert system is based on the expert's knowledge stored in its knowledge base.
An expert system is AI software that uses knowledge stored in a knowledge base to solve problems that
would usually require a human expert, thus preserving a human expert's knowledge in its knowledge
base. It can advise users as well as explain to them how it reached a particular conclusion or piece of
advice.
It is widely used in many areas such as medical diagnosis, accounting, coding, games, etc.
Characteristics of an Expert System:
High Performance: The expert system provides high performance for solving any type of complex
problem of a specific domain with high efficiency and accuracy.
Understandable: It responds in a way that is easily understandable by the user. It can take input in
human language and provides the output in the same way.
Highly responsive: ES provides the result for any complex query within a very short period of time.
The working of an expert system can be represented by a block diagram with the following components:
1) User Interface:
It is the interface that helps a non-expert user communicate with the expert system to find a solution.
With its help, the expert system interacts with the user, takes queries as input in a readable format,
and passes them to the inference engine. After getting the response from the inference engine, it
displays the output to the user. In short, this module allows users to interact with the expert system
by providing input (symptoms, observations, etc.) and receiving explanations or recommendations.
2) Inference Engine:
The function of the inference engine is to fetch the relevant knowledge from the knowledge base,
interpret it, and find a solution relevant to the user's problem. It helps in deriving an error-free
solution to the queries asked by the user.
The inference engine acquires the rules from the knowledge base and applies them to the known facts to
infer new facts. Inference engines can also include explanation and debugging abilities. There are two
main reasoning strategies:
Forward Chaining: It starts from the known facts and rules, and applies the inference rules to add
their conclusions to the known facts. Example: a chess game.
Backward Chaining: It is a backward reasoning method that starts from the goal and works backward
to prove the known facts. Example: disease diagnosis.
3) Knowledge Base:
The knowledge base is a type of storage that stores knowledge acquired from different experts of a
particular domain. It is considered big storage of knowledge; the larger and richer the knowledge base,
the more precise the expert system will be.
It is similar to a database that contains the information and rules of a particular domain or subject.
Two types of knowledge are used in a knowledge base: static knowledge and dynamic knowledge.
This is the core of an expert system: it can include facts, rules, relationships, and heuristics (rules
of thumb) from human experts, and it represents the expertise of those experts.
The development of expert systems (ES) has come a long way since their inception in the 1970s. Here's a
breakdown of the key stages involved and the evolution of the field:
Early Days (1970s):
The birth of expert systems coincided with the rise of artificial intelligence (AI) research.
Early systems focused on capturing knowledge from human experts in specific domains like medicine,
engineering, and geology.
Knowledge representation was primarily rule-based, with experts providing IF-THEN rules that the
system could reason with.
MYCIN, a system for diagnosing bacterial infections, and DENDRAL, used for analyzing chemical
compounds, were pioneering examples.
Growth and Refinement (1980s):
The 1980s witnessed a surge in expert system development due to their perceived potential for
replicating human expertise.
Specialized tools and languages were developed to facilitate knowledge acquisition and inference
engine design.
Expert system shells emerged, providing a basic framework for building expert systems without
needing to program everything from scratch.
Applications expanded beyond diagnosis and troubleshooting to include tasks like financial planning,
equipment configuration, and insurance underwriting.
Challenges and Evolution (1990s and beyond):
Limitations of expert systems, such as the knowledge bottleneck (difficulty in acquiring and
maintaining vast knowledge bases) and inflexibility in dealing with new situations, became apparent.
Popular Examples of Expert Systems:
DENDRAL: It was an artificial intelligence project developed as a chemical-analysis expert system. It
was used in organic chemistry to identify unknown organic molecules with the help of their mass spectra
and a knowledge base of chemistry.
MYCIN: It was one of the earliest backward chaining expert systems that was designed to find the
bacteria causing infections like bacteraemia and meningitis. It was also used for the recommendation of
antibiotics and the diagnosis of blood clotting diseases.
PXDES: It is an expert system used to determine the type and level of lung cancer. To determine the
disease, it takes a picture of the upper body, which looks like a shadow; this shadow identifies the
type and degree of harm.
CaDeT: The CaDeT expert system is a diagnostic support system that can detect cancer at an early stage.
Natural Language Processing (NLP)
These technologies allow computers to analyze and process text or voice data and to grasp their full
meaning, including the speaker's or writer's intentions and emotions.
It helps developers to organize knowledge for performing tasks such as translation, automatic
summarization, speech recognition, relationship extraction, and topic segmentation.
NLP powers many applications that use language, such as text translation, voice recognition, text
summarization, and chatbots.
NLP also helps businesses improve their efficiency, productivity, and performance by simplifying
complex tasks that involve language.
This is a widely used technology for personal assistants in various business fields/areas. It works on
the speech provided by the user, breaks it down for proper understanding, and processes it accordingly.
Natural Language Processing (NLP) is a field that combines computer science, human language, and
Artificial Intelligence to study how computers and humans communicate in natural language.
The goal of NLP is for computers to be able to interpret and generate human language.
Components of NLP: input (speech or written text) passes through Automated Speech Recognition, then
NLU (Natural Language Understanding) and NLG (Natural Language Generation) to produce the output.
Natural Language Understanding (NLU) helps the machine to understand and analyze human language.
NLU breaks down text into individual words to find their actual meaning. It also analyzes the
grammatical structure of a sentence and eliminates common words in the sentence.
NLU is mainly used in business applications to understand the customer's problem in both spoken and
written language.
Phases of NLP
1. Lexical Analysis
The first phase of NLP is lexical analysis. It breaks down the raw text into smaller units called
tokens, which are usually words, punctuation marks, or other meaningful units.
Example: "The cat sat on the mat." becomes ["The", "cat", "sat", "on", "the", "mat", "."]
2. Syntactic Analysis
It analyzes the grammatical structure of the sentence, determining the relationships between words and
phrases, and identifies the subject, verb, and object of the sentence. It helps in structuring the
words in a sentence.
Example: "Ram student is a" is grammatically invalid, so the sentence is rejected by syntactic analysis.
3. Semantic Analysis
It focuses on the meaning of the words and phrases in the sentence. It aims to understand the context
and relationships between words to derive the overall meaning.
Example: "cold sun" is grammatically valid but not meaningful, so it is rejected by semantic analysis.
4. Discourse Integration
This involves resolving ambiguities and understanding how sentences relate to each other. The meaning
of the current sentence depends on the sentences that precede it, and it may also shape the meaning of
the sentences that follow it.
5. Pragmatic Analysis
Pragmatic analysis is the fifth and last phase of NLP. It deals with the practical aspects of language
use, such as understanding the speaker's intent, the context of the conversation, and the social
implications of the language.
Applications of NLP
1. Question Answering
Question Answering focuses on building systems that automatically answer the questions asked by humans
in a natural language.
2. Spam Detection
Spam detection is used to detect unwanted e-mails before they reach a user's inbox.
3. Sentiment Analysis
Sentiment analysis is also known as opinion mining. It is used on the web to analyse the attitude,
behaviour, and emotional state of the sender. This application is implemented through a combination of
NLP (Natural Language Processing) and statistics by assigning values to the text (positive, negative,
or neutral) and identifying the mood of the context (happy, sad, angry, etc.).
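As an illustration, here is a toy lexicon-based sentiment scorer in Python (the word lists and scoring rule are hypothetical; production systems use trained models and much larger lexicons):

    POSITIVE = {"good", "great", "happy", "love", "excellent"}
    NEGATIVE = {"bad", "sad", "angry", "hate", "terrible"}

    def sentiment(text):
        words = text.lower().split()
        # Count positive hits minus negative hits.
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        # Positive score -> positive mood, negative -> negative, zero -> neutral.
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(sentiment("I love this great product"))  # -> positive
    print(sentiment("What a sad, bad day"))        # -> negative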
4. Machine Translation
Machine translation is used to translate text or speech from one natural language to another natural
language.
5. Spelling Correction
Word-processing software such as Microsoft Word and PowerPoint uses NLP for spelling correction.
6. Speech Recognition
Speech recognition is used for converting spoken words into text. It is used in applications such as
mobile devices, home automation, video retrieval, dictating to Microsoft Word, voice biometrics, voice
user interfaces, and so on.
Example: virtual assistants (Google Home, Apple Siri), mobile apps (Messenger, WhatsApp), etc.
7. Chatbot
Implementing chatbots is one of the important applications of NLP. Chatbots are used by many companies
to provide customer chat services.
Machine Vision
Machine Vision (or Computer Vision) refers to the ability of a machine or computer system to interpret
and understand visual information from the world.
It involves using AI to analyze images and videos, recognizing patterns, and making decisions based on
visual inputs.
It can be defined as a computer's ability to see and perceive the environment.
Machine vision is a combination of a variety of separate components into a single unit. The
components comprise a communication system, an optical system, a vision processing system, sensors,
and a lighting system.
Machine vision equipment includes cameras, software, embedded systems, computation, label verification,
and robots.
Machine vision is a technical tool that can be creatively applied to existing technologies in order to solve
problems in the real world.
In addition to being utilized more frequently in other fields like security, autonomous driving, food
production, packaging, logistics, and even in robots and drones, machine vision is growing in
popularity within contexts for industrial automation.
The capabilities of machine vision are being dramatically increased by the emerging field of deep
learning models for AI.
Machine vision is a powerful field of artificial intelligence (AI) that equips computers with the ability to
"see" and interpret the visual world. Here's a breakdown of key concepts and applications of machine
vision:
Core functionalities:
Image Acquisition: Capturing visual data using cameras or other imaging sensors.
Preprocessing: Preparing the image data for further processing, often involving techniques like noise
reduction, filtering, and image scaling.
Feature Extraction: Identifying and extracting relevant features from the image, such as edges,
shapes, colors, textures, or objects of interest.
Image Segmentation: Dividing the image into meaningful segments or regions, which can be
individual objects, parts of objects, or background areas.
Object Recognition and Classification: Identifying and classifying objects within the image based on
the extracted features.
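To make these stages concrete, here is a minimal Python sketch of the first few functionalities using OpenCV (the file name sample.jpg and the parameter values are illustrative assumptions):

    import cv2

    # Image acquisition: load an image from disk (a live camera would use cv2.VideoCapture).
    image = cv2.imread("sample.jpg")

    # Preprocessing: convert to grayscale and reduce noise with a Gaussian blur.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Feature extraction: detect edges with the Canny detector.
    edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

    # Segmentation: find contours (connected regions) in the edge map.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print(f"Found {len(contours)} candidate regions")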
The following equipment is often required for machine vision systems:
Cameras: - In a machine vision system, the cameras serve as the main piece of equipment for inspecting
the object or item. A machine vision system can use a variety of cameras with various interfaces, pixels,
resolutions, and functions.
Smart Cameras: - A smart camera has decision-making and description-generating capabilities, has all
required communication connections, and can connect to Wi-Fi or a server for quick image data transfer.
Software: - Software is needed for operators to analyze and maintain a machine vision system, program
the hardware's functionality, visualize the data, and display what the cameras are seeing.
Embedded Systems: - Also known as an imaging computer, an embedded system is directly connected to a
processing board, combining all parts on one single-board computer.
Robots: - Robots are integrated with machine vision to boost productivity and precision, and to perform
more difficult jobs that can only be completed if the system tells the robot precisely where to
position the object.
By effectively combining these hardware and software components, machine vision systems can interpret
visual data and extract meaningful information, enabling automation, image analysis, and intelligent
decision-making across various industries.
1. Object Detection: - Object detection in AI is a computer vision technique that allows a computer to
identify and locate objects within an image or video. It goes beyond simple image classification, which
only tells you what is in an image, by also telling you where those objects are.
Object detection is about finding where objects are in an image and object recognition is about determining
what an object is.
Core functionalities:
Identifying Objects: The core objective is to determine if specific objects exist within an image or
video frame.
Localization: Accurately pinpointing the bounding boxes around the detected objects, specifying their
position and extent in the image.
Can Handle Multiple Objects: It can effectively detect and localize numerous objects of the same
class (e.g., multiple cars) or even objects from various categories (e.g., a person holding a dog) within a
single image/frame.
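As an illustration, below is a minimal detection sketch in Python using a pretrained Faster R-CNN from torchvision (the model choice and the file name street.jpg are illustrative assumptions, not a prescribed setup):

    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import convert_image_dtype

    # Load a detector pretrained on the COCO dataset.
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    # "street.jpg" is a placeholder input image.
    image = convert_image_dtype(read_image("street.jpg"), torch.float)

    with torch.no_grad():
        output = model([image])[0]  # one result dict per input image

    # Each detection has a bounding box (localization), a class label, and a score.
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if score > 0.8:  # keep only confident detections
            print(label.item(), round(score.item(), 2), box.tolist())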
Applications:
Self-driving Cars: Detecting and locating vehicles, pedestrians, traffic signs, and other objects on the
road for safe navigation.
Facial Recognition: Identifying people in images or videos for security purposes or social media
applications.
Image and Video Surveillance: Detecting suspicious activities or objects in video footage for security
purposes.
Manufacturing: Automating visual inspection tasks on production lines to detect defects or ensure
product quality.
Retail: Enabling self-checkout systems through object recognition at checkout or optimizing product
placement based on customer behavior analysis in stores.
2. Object Recognition: - Object Recognition is an advanced computer vision task where AI identifies
and classifies objects in images or videos. It is closely related to object detection, but instead of just
detecting objects, it also labels them with specific names.
It's a fundamental task in artificial intelligence, enabling machines to "see" and understand the visual
world.
Core Functionalities:
Identification: Object recognition aims to not only detect the presence and location of an object (like
object detection) but also classify its category.
Assigning Labels: It assigns a label to the detected object, identifying its type (e.g., car, dog, person).
Can Include Additional Information: In some cases, it might even provide more specific details
about the recognized object, like the breed of a dog or the model of a car.
Applications:
Self-driving Cars: Beyond detection, recognizing objects like traffic signs and pedestrians is crucial
for autonomous vehicles to understand the environment and make informed decisions.
Facial Recognition: Object recognition is used to identify people in images or videos for security
purposes, social media applications, or even photo tagging.
Object detection and recognition continue to advance rapidly, driven by improvements in deep learning
techniques and the availability of large datasets.
These technologies are integral to many cutting-edge applications and are poised to have a significant
impact across various industries.
Image Segmentation
Image segmentation is a computer vision technique that involves dividing an image into multiple
segments or regions that belong to the same class.
It enables object detection and recognition in images and it allows for more detailed analysis of specific
image regions.
The primary objective of image segmentation is to divide an image into distinct regions or segments.
These segments can correspond to individual objects, parts of objects, or even the background.
Types of Image Segmentation
1. Semantic Segmentation
Each pixel in the image is classified into a predefined category.
Does not differentiate between instances of the same object.
Example: Segmenting all cars in an image as a single class.
2. Instance Segmentation
Detects and delineates each individual object instance separately, even when the instances belong to
the same class.
Example: Segmenting each car in an image as a distinct object.
3. Panoptic Segmentation
Combines elements of both semantic and instance segmentation, providing a more detailed
understanding. It can not only classify pixels but also differentiate between object instances and
background regions.
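As a concrete illustration, below is a minimal semantic-segmentation sketch in Python using a pretrained DeepLabV3 model from torchvision (the model choice and the file name scene.jpg are illustrative assumptions):

    import torch
    from torchvision.io import read_image
    from torchvision.models.segmentation import deeplabv3_resnet50
    from torchvision.transforms.functional import convert_image_dtype

    model = deeplabv3_resnet50(weights="DEFAULT")
    model.eval()

    image = convert_image_dtype(read_image("scene.jpg"), torch.float)  # placeholder image

    with torch.no_grad():
        out = model(image.unsqueeze(0))["out"]  # shape: (1, num_classes, H, W)

    # Per-pixel class = argmax over the class dimension. This is semantic segmentation:
    # all pixels of the same class share one label; instances are not separated.
    mask = out.argmax(dim=1).squeeze(0)
    print(mask.shape, mask.unique())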
Explainable AI (XAI) in Computer Vision
Explainable AI (XAI) in computer vision refers to techniques and methods that make the decisions and
predictions of AI models, particularly those involving visual data, understandable to humans.
As AI models, especially deep neural networks, have become more complex, their decisions often appear
as "black boxes" with little insight into how they arrive at specific conclusions. XAI aims to address
this issue by providing explanations for AI decisions, enhancing trust, accountability, and the ability
to diagnose errors.
2. Regulatory Compliance:
- Regulations like the EU's GDPR emphasize the need for explainability, requiring that automated
decisions be explainable to affected individuals.
3. Bias Detection:
- Explainability can reveal biases in the model, allowing developers to address and mitigate unfair
treatment of certain groups.
1. Medical Imaging:
- Enhances trust in automated diagnostic tools by explaining why certain regions in an image are flagged
as indicative of disease. Helps radiologists understand AI predictions and make more informed decisions.
2. Autonomous Vehicles:
- Explains decisions made by autonomous vehicles, such as why an obstacle was detected or why a
particular path was chosen.
- Increases safety and accountability by providing insights into the vehicle's perception and decision-
making processes.
3. Security and Surveillance:
- Helps in validating the AI system's accuracy and fairness in identifying suspicious activities.
4. Retail:
- Enhances product recommendation systems by explaining why certain products are suggested, based on
visual similarities or features.
5. Manufacturing and Quality Inspection:
- Assists human inspectors in understanding AI assessments and improving quality assurance processes.
Explainable AI in computer vision is vital for building trust, ensuring accountability, and improving
AI systems. As AI continues to integrate into critical applications, the demand for transparency and
explainability will only grow, driving further advancements in this field.
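One simple XAI technique for vision models is a gradient-based saliency map: it highlights the input pixels whose changes most affect the model's output score. Below is a minimal PyTorch sketch (the ResNet-18 model and the random input are stand-ins for a real classifier and image):

    import torch
    from torchvision.models import resnet18

    model = resnet18(weights="DEFAULT")
    model.eval()

    # Stand-in for a preprocessed input image (batch of 1, 3x224x224).
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    # Forward pass, then backpropagate the score of the predicted class.
    scores = model(image)
    scores[0, scores.argmax()].backward()

    # Saliency: magnitude of the gradient w.r.t. each input pixel,
    # reduced over color channels -> a 224x224 importance map.
    saliency = image.grad.abs().max(dim=1).values.squeeze(0)
    print(saliency.shape)  # torch.Size([224, 224])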
Applications of AI in Healthcare
Artificial Intelligence (AI) is revolutionizing healthcare by enhancing disease diagnosis, drug discovery,
personalized treatment, medical imaging, robotic surgeries, and hospital management. AI-driven
solutions improve accuracy, efficiency, and accessibility in medical services.
1. Disease Diagnosis
AI-powered models analyze medical images, patient records, and genetic data to improve
disease detection.
Key applications:
o Cancer Detection – AI identifies cancerous cells in MRI, CT scans, and X-rays.
o Diabetes & Cardiovascular Disease Prediction.
2. Drug Discovery
AI accelerates the discovery of new drugs and treatments, reducing costs and time.
Key applications:
o New Drug Development – AI predicts effective drug compounds.
o Virtual Drug Screening – Testing thousands of drug combinations before lab trials.
o Drug Repurposing – Identifying new uses for existing drugs.
Examples:
o DeepMind’s AlphaFold accurately predicts protein structures for drug development.
o AI helped accelerate COVID-19 vaccine and drug research.
3. Personalized Medicine
AI creates personalized treatment plans by analyzing a patient's genetic data, medical history, and
lifestyle, and predicts which treatments will be most effective for an individual.
4. Medical Imaging
AI improves accuracy and speed in analyzing medical images like MRI, CT scans, and ultrasounds.
Key applications:
o Breast Cancer Detection.
o Brain Tumor Identification.
o Lung Disease Diagnosis.
Examples:
o Google AI outperformed radiologists in detecting lung cancer from CT scans.
5. Robotic Surgery
AI-assisted robotic systems help surgeons perform precise, minimally invasive procedures.
6. AI Chatbots and Virtual Health Assistants
AI chatbots provide health advice, symptom checking, and mental health support.
Key applications:
o Symptom Checker – Suggests potential conditions based on symptoms.
Challenges of AI in Healthcare
Key challenges include data privacy and security, algorithmic bias, regulatory approval, and the need
for clinical validation before deployment.
Applications of AI in Bioinformatics
Bioinformatics is an interdisciplinary field that combines biology, computer science, and AI to analyze
biological data. AI, especially machine learning (ML) and deep learning (DL), is revolutionizing
bioinformatics by automating complex data analysis, improving accuracy, and accelerating discoveries.
3. Disease Outbreak Prediction
AI predicts and tracks disease outbreaks using genetic and clinical data.
Applications:
o COVID-19 outbreak prediction – AI analyzed global virus spread.
o Antibiotic resistance monitoring – Identifying bacterial mutations that resist antibiotics.
Example: AI-driven models helped forecast COVID-19 mutations and spread patterns.
Challenges of AI in Bioinformatics
Big Data Complexity – Biological datasets (e.g., genomics) are massive and complex.
Applications of AI in Medicine
AI is revolutionizing the field of medicine by enhancing accuracy, efficiency, and accessibility in
healthcare. Here are some key applications of AI in medicine:
2. Personalized Medicine
AI helps create personalized treatment plans by analyzing a patient’s genetic data, medical history, and
lifestyle. It predicts which medications or treatments will be most effective for an individual, making
precision medicine more accessible.
AI is transforming healthcare by making medical services faster, more accurate, and accessible to a larger
population. Its continuous advancements promise a future of improved patient care and medical innovation.
Predictive Modeling in Healthcare
AI models analyze patient records, genetic information, and lifestyle factors to predict the
likelihood of diseases such as diabetes, heart disease, Alzheimer's, and cancer before symptoms
appear.
Example: Google’s DeepMind developed AI models to predict acute kidney injury (AKI) 48
hours before clinical diagnosis.
AI can analyze past hospital records and identify patients at high risk of readmission.
Hospitals use predictive models to improve discharge planning and post-hospital care.
AI analyzes real-time health reports, social media, and travel data to predict disease outbreaks.
Example: BlueDot detected early signs of COVID-19 before it became a global pandemic.
Predictive modeling helps create precision medicine by analyzing patient-specific factors and
recommending the most effective treatment.
Example: AI-powered models suggest chemotherapy plans for cancer patients based on their
genetic profile.
AI predicts abnormalities in X-rays, MRIs, and CT scans, improving early detection of conditions
like lung cancer, brain tumors, and fractures.
AI can predict sepsis risk by analyzing vital signs, lab results, and patient history, allowing early
intervention and reducing mortality rates.
AI models predict patient admissions, ICU occupancy, and equipment demand, helping
hospitals allocate resources effectively.
Example: During COVID-19, AI was used to forecast ICU bed shortages and ventilator needs.
AI analyzes speech patterns, social media activity, and physiological data to detect signs of
depression, anxiety, and suicide risk.
Example: AI chatbots like Woebot assist in early mental health intervention.
AI predicts how a patient will respond to medications and identifies potential side effects before
they occur.
Example: AI is used in clinical trials to predict patient outcomes and optimize drug formulations.
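To illustrate the basic workflow of predictive modeling, here is a minimal scikit-learn sketch that trains a disease-risk classifier on synthetic data (the features, labels, and thresholds are invented for illustration; real systems train on actual clinical records):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic patient features: age, BMI, blood pressure, glucose level.
    X = rng.normal(loc=[55, 27, 130, 100], scale=[15, 5, 20, 25], size=(500, 4))
    # Synthetic label: higher glucose and BMI raise disease risk (toy rule).
    y = ((X[:, 3] > 110) & (X[:, 1] > 26)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))

    # Predicted risk probability for a new (hypothetical) patient.
    patient = [[62, 31, 145, 125]]
    print("disease risk:", model.predict_proba(patient)[0, 1])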
Predictive modeling in healthcare is transforming the industry by enabling early intervention, improving
patient care, and optimizing resources. As AI continues to evolve, its predictive capabilities will further
revolutionize disease prevention, diagnostics, and treatment planning.