What Is Learning?

Learning encompasses increasing knowledge, applying it, and embracing lifelong education, while AI, ML, and DL are interconnected fields in computer science with distinct roles. AI simulates human intelligence, ML focuses on systems that learn from data, and DL specializes in deep neural networks for complex tasks. Applications of ML span various domains, including image recognition, natural language processing, and healthcare, while challenges such as ill-posed problems and overfitting can hinder effective model performance.

Uploaded by

20235203067
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
2 views9 pages

What Is Learning?

Learning encompasses increasing knowledge, applying it, and embracing lifelong education, while AI, ML, and DL are interconnected fields in computer science with distinct roles. AI simulates human intelligence, ML focuses on systems that learn from data, and DL specializes in deep neural networks for complex tasks. Applications of ML span various domains, including image recognition, natural language processing, and healthcare, while challenges such as ill-posed problems and overfitting can hinder effective model performance.

Uploaded by

20235203067
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 9

What is learning?

+ Increasing knowledge
+ Memorising what has been learnt
+ Applying and using knowledge
+ Understanding what has to be learned
+ Seeing things differently
+ Building social competence
+ Embracing learning as something which is life-long and life-wide
+ Changing as a person
Arficial Inelligence (AI), Machine Learning (ML), and Deep Learning (DL) are inerconneced
fields in compuer science, each wih disnc roles:

• Arficial Inelligence (AI) is he broades concep reerring o sysems or machines


designed o simulae human inelligence, including reasoning, learning, percepon, and
problem-solving, decision-making. I encompasses any echnique enabling compuers o
emulae human behavior, such as reasoning, planning, or naural language processing.
Examples include chabos, recommendaon sysems, or auonomous vehicles.

• Machine Learning (ML) is a branch o arficial inelligence (AI) ha ocuses on building
sysems ha can learn rom daa, ideny paerns, and make decisions wih minimal
human inervenon. In essence, machine learning allows compuers o "learn" rom pas
experiences (daa) and improve heir perormance over me wihou being explicily
programmed or every possible scenario. Common echniques include supervised
learning (e.g., regression, classificaon), unsupervised learning (e.g., clusering), and
reinorcemen learning. Applicaons include spam deecon or image recognion.

• Deep Learning (DL) is a specialized subse o ML ocused on neural neworks wih many
layers (deep neural neworks). DL excels a complex asks like image recognion, naural
language processing, and auonomous driving due o is abiliy o learn hierarchical
eaure represenaons rom large daases.
Machine Learning: Applicaons
Image Recognion:
Facial Recognion: Used in smarphones, social media plaorms (e.g., Facebook phoo agging),
and securiy sysems. ML algorihms can deec and recognize human aces in images or videos.

Medical Imaging: ML is used in analyzing X-rays, MRIs, and CT scans o ideny abnormalies
such as umors, racures, or oher diseases.

Objec Deecon: Used in auonomous vehicles, reail (e.g., Amazon Go sores), and securiy
surveillance sysems.

Speech and Voice Recognion:


Virual Assisans: ML powers virual assisans like Siri, Alexa, and Google Assisan, enabling
hem o recognize and respond o voice commands.

Speech-o-Tex: Applicaons like Google Voice Typing and auomaed ranscripon services
conver spoken words ino wrien ex.

Voice Biomerics: Used in securiy sysems or idenying individuals based on heir voice.

Naural Language Processing (NLP):


Chabos and Cusomer Suppor: ML-driven chabos can undersand and respond o cusomer
queries, improving cusomer service or companies like banks, e-commerce plaorms, and
service providers.

Language Translaon: Google Translae and oher language ranslaon apps use ML models o
ranslae ex and speech beween differen languages.

Senmen Analysis: Businesses use ML o analyze cusomer reviews, social media poss, and
oher ex daa o deermine public senmen or opinions abou producs or services.
Recommendaon Sysems:
E-commerce: Plaorms like Amazon and Alibaba use ML o recommend producs based on a
user’s browsing hisory, purchase behavior, and preerences.

Sreaming Services: Nelix, Spoy, and YouTube recommend movies, music, and videos by
analyzing user ineracons, viewing hisory, and preerences.

Social Media: Facebook, Twier, and Insagram use recommendaon algorihms o sugges
riends, poss, or conen ha align wih a user’s ineress.

Fraud Deecon:
Banking and Finance: ML algorihms analyze ransacon paerns o deec raudulen acvies,
such as credi card raud, money laundering, or accoun akeovers.

Insurance: Insurers use ML o deec raudulen insurance claims by spotng unusual paerns or
behaviors in claim submissions.

Auonomous Vehicles:
Self-Driving Cars: Companies like Tesla, Waymo, and Uber use ML o help auonomous vehicles
inerpre heir surroundings (e.g., deecng pedesrians, road signs, and oher cars), make driving
decisions, and navigae saely.

Drone Navigaon: ML is used in drones or roue planning, objec deecon, and collision
avoidance.

Healhcare and Medical Diagnosis:


Disease Predicon: ML models help predic he likelihood o diseases such as diabees, hear
disease, or cancer by analyzing paen daa.

Personalized Medicine: Machine learning enables he developmen o personalized reamen


plans based on a paen's genecs, medical hisory, and curren healh daa.

Drug Discovery: ML acceleraes he process o discovering new drugs by analyzing daa rom
clinical rials and idenying poenal drug candidaes.

• Finance and Sock Marke Analysis


• Markeng and Adversing
• Supply Chain Opmizaon
• Gaming, Robocs, Cybersecuriy
• Personalizaon
• Energy Efficiency
Ill-Posed Problems in Machine Learning

An ill-posed problem, in the context of ML, refers to a problem that violates one or more of the conditions for being well-posed, as defined by Jacques Hadamard:

1. A solution exists.
2. The solution is unique.
3. The solution is stable (small changes in input lead to small changes in output).

In ML, ill-posed problems often arise when the data or model setup leads to ambiguity, non-uniqueness, or instability in solutions, making it hard to achieve reliable predictions.
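Hadamard's third condition (stability) can be seen in miniature with a nearly singular system of two linear equations: a tiny perturbation of the input changes the solution drastically. The numbers below are purely illustrative:

```python
# Stability failure in a nearly singular 2x2 linear system: when the two
# equations are almost parallel, a tiny change in the right-hand side
# moves the solution by a large amount.

def solve2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Rows (1, 1) and (1, 1.0001) are almost parallel -> ill-conditioned.
base = solve2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0)       # solution (2, 0)
nudged = solve2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)  # input moved by 0.0001

print(base)    # approximately (2.0, 0.0)
print(nudged)  # approximately (1.0, 1.0): the answer moved by about 1.0
```

An ML model whose predictions flip under comparably small input noise fails this stability condition in exactly the same sense.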

Relevance o ML (Cancer Deecon Example)

In cancer deecon rom medical imaging:

• Exisence: A soluon may exis, bu he complexiy o medical images (e.g., suble umor
paerns) makes i hard o guaranee he model will find i.

• Uniqueness: Mulple models or parameer setngs migh produce similar resuls, bu i’s
unclear which is opmal. For insance, differen CNN archiecures may deec umors
wih comparable accuracy bu ocus on differen image eaures.

• Sabiliy: Small changes in inpu images (e.g., noise, variaons in imaging equipmen, or
paen demographics) can lead o drascally differen predicons i he model isn’
robus.

Causes of Ill-Posed Problems in ML

1. Insufficient Data:
o Small or non-representative datasets fail to capture the full variability of the
problem. For instance, a dataset of 1,000 mammograms may not include enough
cases of rare tumor types, leading to incomplete learning.
o Lack of diversity (e.g., images from a single demographic) can cause the model to
miss generalizable patterns.
2. High Dimensionality:
o ML problems often involve high-dimensional data (e.g., medical images with
millions of pixels). This increases the risk of overfitting or finding multiple
solutions that fit the data equally well.
o In cancer detection, the high number of features (pixels) compared to the number
of samples can make the problem underdetermined.
3. Noisy or Ambiguous Data:
o Real-world data often contains noise (e.g., artifacts in medical images from
equipment) or ambiguous patterns (e.g., benign and malignant tissues with similar
appearances), making it hard to define a unique solution.
o For example, subtle differences between cancerous and non-cancerous regions
may be indistinguishable without additional context.
4. Ill-Defined Problem Formulation:
o Poorly defined objectives or labels can exacerbate ill-posedness. For instance, if
"cancer" labels in a dataset are inconsistent due to human error, the model may
learn incorrect patterns.
o In regression tasks (e.g., predicting tumor size), the relationship between inputs
and outputs may be inherently ambiguous due to biological variability.
5. Non-Linear and Complex Relationships:
o Many ML problems involve complex, non-linear relationships that are difficult to
model accurately. In cancer detection, the relationship between pixel patterns and
malignancy is highly non-linear, increasing the risk of instability.
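The underdetermined case described in cause 2 (more features than samples) can be made concrete with a toy linear model; the data and weights below are purely illustrative:

```python
# With more features than samples, many different weight vectors fit the
# training data equally well -- Hadamard's uniqueness condition fails.

def fit_error(weights, samples):
    """Sum of squared errors of a linear model w . x over (x, y) samples."""
    return sum((sum(w * xi for w, xi in zip(weights, x)) - y) ** 2
               for x, y in samples)

# One sample, two features: we need w1*1 + w2*1 = 2.
samples = [((1.0, 1.0), 2.0)]

# Three very different models, all with zero training error:
print(fit_error((2.0, 0.0), samples))  # 0.0 -- relies only on feature 1
print(fit_error((0.0, 2.0), samples))  # 0.0 -- relies only on feature 2
print(fit_error((1.0, 1.0), samples))  # 0.0 -- splits the weight evenly
```

The training data alone cannot say which model is right, which is exactly the situation when a few thousand images must constrain millions of pixel features.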

Overfitting in Machine Learning

Overfitting is a critical challenge in machine learning (ML) where a model learns the training data too well, including its noise and outliers, resulting in poor performance on new, unseen data. This leads to a model that is overly complex and fails to generalize to real-world scenarios.

Relevance o ML (Cancer Deecon Example)

In cancer deecon:

• A CNN migh memorize specific pixel paerns in he raining mammograms, including
irrelevan deails like imaging aracs or paen-specific noise.

• When esed on new images (e.g., rom a differen hospial or machine), he model may
ail o correcly classiy umors, leading o low sensiviy or specificiy.

Causes of Overfitting

1. Complex Models:

o Models with too many parameters (e.g., deep neural networks with millions of weights) can fit the training data excessively, capturing noise instead of general patterns.

o Example: A CNN with 50 layers might overfit a small dataset of 1,000 mammograms by learning irrelevant pixel patterns.

2. Insufficient Data:

o Small or non-diverse datasets limit the model's ability to learn generalizable patterns.

o Example: If a cancer detection dataset has few malignant cases or is sourced from one hospital, the model may overfit to specific imaging conditions.

3. Noisy Data:

o Real-world data often contains noise (e.g., imaging artifacts in medical scans or mislabeled data), which the model may mistakenly treat as meaningful.

o Example: A CNN might learn to associate scanner-specific noise with cancer, leading to false positives on cleaner images.

4. Lack of Regularization:

o Without constraints, models can become overly flexible, fitting the training data too closely.

o Example: A CNN without dropout or weight penalties may prioritize irrelevant features in mammograms.

5. Imbalanced Data:

o If the dataset is skewed (e.g., 90% benign and 10% malignant cases), the model may overfit to the majority class, neglecting minority patterns.

o Example: A model might achieve high accuracy by predicting most cases as benign, missing critical malignant cases.

6. Overtraining:

o Training a model for too many epochs can cause it to memorize the training data rather than learn general patterns.

o Example: After 50 epochs, a CNN might start fitting noise in mammograms, reducing its ability to generalize.

Implicaons of Overfitng
• Poor Generalizaon: The model perorms well on raining daa bu poorly on new daa,
liming is praccal uliy.

o Example: A cancer deecon model wih 95% raining accuracy bu 70% es
accuracy ails o reliably deec umors in clinical setngs.

• Reduced Trus: In applicaons like healhcare, overfitng can lead o alse posives or
negaves, eroding confidence in he model.
• Resource Wase: Overfied models require more compuaonal resources and me o
rain, wih diminishing reurns on perormance.

Soluons o Migae Overfitng

1. Regularizaon:

o Adds consrains o reduce model complexiy and preven overfitng o noise.

o Techniques:

▪ L1/L2 Regularizaon: Penalizes large weighs in he model (e.g., adding a


erm o he loss uncon o discourage overly complex CNNs).

▪ Dropou: Randomly deacvaes a racon o neurons during raining,


orcing he model o learn robus eaures.

o Example: Applying dropou (e.g., 50% neuron dropou rae) o a CNN or cancer
deecon prevens reliance on specific pixel paerns.

2. Daa Augmenaon:

o Increases daase size and diversiy by generang synhec variaons o he


raining daa.

o Example: For mammograms, augmenaons like roaon, flipping, zooming, or


adjusng brighness help he model generalize o varied imaging condions.

3. More Daa:

o Collecng larger, more diverse daases ensures he model learns represenave
paerns.

o Example: Including mammograms rom mulple hospials, demographics, and


imaging devices reduces he risk o overfitng o specific condions.

4. Cross-Validaon:

o Splis daa ino k-olds (e.g., 5-old cross-validaon) o evaluae model


perormance on mulple subses, ensuring robusness.

o Example: A cancer deecon model esed across five olds o diverse daa is less
likely o overfi han one rained on a single spli.

5. Early Sopping:
o Moniors validaon loss during raining and sops when i no longer improves,
prevenng he model rom memorizing raining daa.

o Example: Halng raining afer 10 epochs i validaon loss increases while raining
loss connues o drop.

6. Simpler Models:

o Using less complex archiecures reduces he risk o overfitng, especially wih
small daases.

o Example: Swiching rom a 50-layer CNN o a 10-layer CNN or using ranser


learning wih a pre-rained model like ResNe.

7. Transfer Learning:

o Fine-unes pre-rained models (e.g., rained on ImageNe) on specific asks,


leveraging general eaures o reduce overfitng.

o Example: Fine-uning a pre-rained ResNe on a small mammogram daase


improves generalizaon compared o raining rom scrach.

8. Daa Cleaning:

o Removing or correcng noisy or mislabeled daa improves he qualiy o he


raining se.

o Example: Veriying labels in a cancer daase o ensure malignan/benign cases are


accuraely annoaed.

9. Balanced Daases:

o Addressing class imbalance (e.g., using oversampling, undersampling, or synhec


daa generaon like SMOTE) ensures he model learns paerns rom all classes.

o Example: Generang synhec malignan mammograms o balance a daase wih


ew posive cases.
