1. Prompt Engineering
What is Prompt Engineering?
Prompt engineering is all about creating effective instructions in natural language so AI models (like ChatGPT, Claude, GPT-4) understand and respond usefully. It's not coding - it's about wording and structure.
- Prompt = instruction to AI.
- A prompt can be a query, context, role statement, style request, or a combination - it guides the AI's output.
It's crucial: modern work across content, tech, customer support, and more increasingly integrates AI. Crafting prompts effectively allows AI to assist intelligently in everyday roles.
Core Insights
- Think of AI as a talented but new assistant - it needs explicit, unambiguous instructions to act correctly.
- A vague or poorly phrased prompt often leads to off-target or unhelpful responses.
- Prompt engineering is about talking to AI in a smart and structured way to get the result you want - you're communicating, not coding.
2. LLM Settings (Large Language Model Configuration)
LLM settings are parameters you configure when interacting with LLMs via API (OpenAI, Hugging Face). These settings shape how creative or conservative the AI's responses can be.
i. Temperature
- Determines randomness in AI responses.
- Lower values = more deterministic and precise answers.
- Higher values = more creative or varied responses.
ii. Max Tokens
- Sets the length of the AI's output.
- Higher limits allow more detailed responses; lower limits keep output concise.
iii. Top-p
- Rather than picking from all tokens, the model chooses from the smallest set of tokens whose cumulative probability reaches p.
- Combines with temperature to fine-tune output diversity.
iv. Frequency Penalty & Presence Penalty
- Frequency penalty reduces repeated tokens.
- Presence penalty discourages introducing the same concept repeatedly.
- Useful for avoiding redundant and circular answers.
v. Stop Sequences
- Predefined token(s) that tell the model when to stop generation.
- Ensures clean, bounded responses.
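The temperature and top-p settings above can be illustrated with a small sampling sketch. This is a toy decoder, not any provider's actual implementation: it shows how temperature reshapes the probability distribution and how top-p (nucleus sampling) restricts the candidate set before sampling.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0):
    """Pick the next token id from raw logits, illustrating how the
    temperature and top-p settings interact during decoding (toy model)."""
    # Temperature scales the logits: <1.0 sharpens (more deterministic),
    # >1.0 flattens (more creative / varied).
    scaled = [l / temperature for l in logits]
    # Softmax to probabilities.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-p (nucleus): keep the smallest set of tokens whose cumulative
    # probability reaches p, instead of sampling from the full vocabulary.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in ranked:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the kept tokens and sample.
    kept_total = sum(probs[i] for i in kept)
    weights = [probs[i] / kept_total for i in kept]
    return random.choices(kept, weights=weights, k=1)[0]

# With a very low temperature and tight top-p, sampling is effectively greedy.
logits = [2.0, 1.0, 0.1]
print(sample_token(logits, temperature=0.1, top_p=0.5))  # → 0 (the top token)
```

Note how a low temperature plus a small top-p collapses the choice to the single most likely token, which is why low settings give deterministic, precise answers.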
Core Insights 2
- Effective prompt engineering isn't just about wording - it extends to how you configure the model parameters.
- You can craft prompts with the right wording, but real control comes when prompt and LLM settings work together.
- Choosing the right settings depends on the use case, e.g.:
  - Tight control = low temperature, strong penalties.
  - Creative brainstorm = higher temperature, liberal token limit.
- Note: don't use frequency penalty and presence penalty at the same time.
- We can tune temperature and top-p at the same time, but the recommendation is to adjust only one of them at a time.
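The two use cases above can be written out as parameter presets. The names and values here are illustrative assumptions, not official defaults of any API:

```python
# Two illustrative API parameter presets showing how settings map to use cases.
# (Values are assumptions for demonstration, not recommendations from any vendor.)

TIGHT_CONTROL = {              # factual Q&A, extraction: deterministic output
    "temperature": 0.2,        # low randomness
    "top_p": 1.0,              # leave top-p alone; tune temperature only
    "max_tokens": 256,         # keep answers concise
    "frequency_penalty": 0.5,  # discourage repeated tokens
}

CREATIVE_BRAINSTORM = {        # ideation, drafting: varied output
    "temperature": 0.9,        # high randomness
    "top_p": 1.0,
    "max_tokens": 1024,        # liberal token limit for longer responses
    "frequency_penalty": 0.0,
}
```

Following the note above, only one of temperature and frequency/presence penalty varies per preset; top-p is left at its default in both.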
Learning prompt engineering is two-fold: the language of the prompt matters, and so do the LLM settings.
3. Prompt Elements
i. Instruction
- What we want as output. Ex: summarize the paragraph.
ii. Context
- Background that shapes the output based on who will use the content. Ex: a software engineer, a data analyst.
iii. Input Data
- The data that we give to the model.
iv. Output Indicator
- Sets how we want the output, like JSON, CSV, or a paragraph.
Example
- Summarize the following text for a legal team; output the response in pointers. (The supplied text itself is the input data.)
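The four elements can be assembled mechanically. A minimal sketch, where the labels and layout are one reasonable convention rather than a standard:

```python
def build_prompt(instruction, context, input_data, output_indicator):
    """Assemble the four prompt elements (instruction, context,
    input data, output indicator) into one prompt string."""
    return (
        f"{instruction}\n"
        f"Audience/context: {context}\n"
        f"Output format: {output_indicator}\n"
        f"Text:\n{input_data}"
    )

prompt = build_prompt(
    instruction="Summarize the following text for a legal team.",
    context="Readers are lawyers reviewing a contract.",
    input_data="<contract text goes here>",   # placeholder input data
    output_indicator="bullet points",
)
print(prompt)
```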
4. Zero-shot, One-shot and Few-shot Prompting
- Zero-shot: no examples given - relies purely on instruction.
- One-shot: a single example is given.
- Few-shot: multiple examples shown to guide AI behaviour.
i. Zero-shot prompting
- A direct instruction, no examples.
- Works best for simple or frequent tasks.
- Ex: "Translate to French: ..." without demonstration.
ii. One-shot prompting
- You provide one illustrative example before the task.
- Helps clarify ambiguous instructions.
- Ex: classifying sentiment with one example.
iii. Few-shot prompting
- You include 2-5 examples in the prompt.
- Enables in-context learning: the model sees patterns and generalizes.
- Ex: define a new word and show how it should be used in sentences.
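A few-shot prompt for the sentiment example above can be built like this. The sketch follows the instruction-then-examples-then-new-input layout; the label names are illustrative:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Template: task instruction first, then 2-5 labeled examples,
    then the new input for the model to complete."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}")
    lines.append(f"Text: {new_input}\nSentiment:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify the sentiment of each text as Positive or Negative.",
    [("I love this phone!", "Positive"),
     ("The battery died in an hour.", "Negative")],
    "Great screen, fast delivery.",
)
print(prompt)
```

The prompt deliberately ends at the incomplete `Sentiment:` label, so the model's natural continuation is the classification itself.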
Core Insights
- Context matters: examples teach the AI the task format and response style, reducing unwanted variability.
- Example design is critical: quality, order, and format influence model behavior.
- For better results, include both the previous question and its solution with a logical explanation as context before asking the next question.
5. Self-Consistency Prompting
- Self-consistency builds on Chain-of-Thought (CoT) by generating multiple reasoning paths for the same prompt and choosing the most common final answer.
- This significantly boosts AI reliability and accuracy.
- Combined with CoT, it is one of the most robust prompting strategies for tasks requiring precise logic.
- More reasoning paths = better reliability = a more accurate final answer.
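The majority-vote step of self-consistency is simple to sketch. Here a stub function stands in for the real LLM call that would sample one reasoning path and return its final answer:

```python
import itertools
from collections import Counter

def self_consistency(sample_answer, prompt, paths=5):
    """Run the same CoT prompt several times (via a pluggable
    `sample_answer` function standing in for a real LLM call) and
    majority-vote the final answers."""
    answers = [sample_answer(prompt) for _ in range(paths)]
    return Counter(answers).most_common(1)[0][0]

# Stub "model": in practice each call would return the final answer
# extracted from one independently sampled chain-of-thought.
fake_answers = itertools.cycle(["42", "42", "41", "42", "40"])
result = self_consistency(lambda p: next(fake_answers), "Q: ...", paths=5)
print(result)  # → "42" (3 of 5 reasoning paths agree)
```

Even though two of the five sampled paths disagree, the vote recovers the consensus answer, which is exactly why more paths improve reliability.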
6. Out-of-Date Learning Prompting
- A method that helps AI handle queries about topics not included in its training knowledge cut-off, by incorporating external concepts or updates into the prompt.
- Provide the new information manually, like a recent news snippet, data point, or fact, directly in the prompt.
- This ensures the AI bases its answer on the supplied info, which keeps its outputs relevant and correct rather than relying on outdated training.
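Injecting fresh facts can be done with a small helper. The layout and the example fact below are hypothetical, purely for illustration:

```python
def with_fresh_facts(question, facts):
    """Prepend manually supplied, post-cutoff facts so the model answers
    from them instead of stale training data (illustrative layout)."""
    facts_block = "\n".join(f"- {f}" for f in facts)
    return (
        "Use ONLY the facts below; they are newer than your training data.\n"
        f"Facts:\n{facts_block}\n\n"
        f"Question: {question}"
    )

prompt = with_fresh_facts(
    "Who is the current CEO of ExampleCorp?",
    ["ExampleCorp appointed Jane Doe as CEO last month."],  # hypothetical fact
)
print(prompt)
```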
- Few-shot improves accuracy for structured tasks; without strong examples, performance can degrade.
- Template format: start with the task instruction, then the examples, then the new input.
- Prompt engineering goes beyond phrasing - it's about teaching the AI via examples.
- Few-shot prompting offers powerful in-context learning, allowing the model to learn task structure and style from your demonstrations.
Chain-of-Thought (CoT) Prompting
- When using older models, you need to provide both the previous logic question and its explanation if you want the next logic questions answered correctly.
- That means you can't just keep feeding questions - the AI will usually fail unless it knows the reasoning from the earlier steps.
- With newer LLMs, the model often answers correctly without seeing the previous logical explanation - you can just pose a question and it gives the logic plus the result in one shot.
- Chain-of-thought prompting effectively teaches the AI how to think by showing step-by-step reasoning. It helps with logical, math, or multi-step tasks where context is key.
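A CoT prompt can carry an earlier question together with its reasoning, as described above, and then nudge the model to reason step by step on the new question. A minimal sketch (the wording is one common convention, not a fixed standard):

```python
def cot_prompt(question, worked_example=None):
    """Chain-of-thought prompt: optionally show one worked example with
    its reasoning, then ask the model to reason step by step."""
    parts = []
    if worked_example:
        q, reasoning, answer = worked_example
        parts.append(f"Q: {q}\nReasoning: {reasoning}\nAnswer: {answer}\n")
    parts.append(f"Q: {question}\nLet's think step by step.")
    return "\n".join(parts)

example = (
    "A shop sells pens at 3 for $2. How much do 9 pens cost?",
    "9 pens = 3 groups of 3; 3 groups x $2 = $6.",
    "$6",
)
p = cot_prompt("How much do 15 pens cost?", worked_example=example)
print(p)
```

With a newer model the `worked_example` can often be omitted and the "think step by step" cue alone suffices, matching the note above about older versus newer LLMs.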
Core Insights
- Role-based prompting gives you stronger control over tone, structure, and expertise level in AI responses.
- By specifying a persona and context, the AI tailors its output, making it more aligned with real-world needs (customer support scripts, coaching).
- Role-play prompting: a method where you ask the AI to adopt a specific role or persona to shape content style and tone.
8. Retrieval-Augmented Generation (RAG) Prompting
RAG retrieves relevant documents (from a knowledge base, API, or database) and incorporates them into a prompt for the LLM, enriching its response context.
How it works:
- External data source: collect documents or facts, embed them, and chunk as needed.
- Vector store retrieval: on a query, fetch the relevant snippets.
- Prompt augmentation: inject the retrieved text into the prompt so the model bases its generation on it.
Core Insights
- RAG enhances reliability and is more cost-effective than retraining or fine-tuning for new-data scenarios.
Core Insights
- This technique is essential for ensuring accurate, current responses on topics like news, evolving technologies, or recent events.
- Rather than relying solely on the model's pre-training, you effectively supplement the model's memory with fresh context via prompt design.
- Out-of-date learning prompting is a powerful approach to bring models up to speed on current facts by embedding external updates directly in the prompt.
7. Role-play Prompting
- Assign the AI a role such as teacher or marketer, and then give the task, ensuring the response matches the persona's voice and expertise.
- This aligns output to needs in voice, tone, and content.
A structured prompt model:
- Role: the persona to act as (e.g., teacher).
- Task: what the AI should do (e.g., suggest a study plan).
- Format: delivery style (e.g., bullets, tone).
- Ensures clarity and consistency in the AI's responses.
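The Role / Task / Format structure maps directly onto a template. A minimal sketch, with an invented teacher persona as the example:

```python
def role_prompt(role, task, fmt):
    """Role / Task / Format structured prompt, matching the model above."""
    return (
        f"You are a {role}.\n"
        f"Task: {task}\n"
        f"Format: {fmt}"
    )

msg = role_prompt(
    "patient high-school math teacher",
    "suggest a one-week study plan for quadratic equations",
    "bullet points, encouraging tone",
)
print(msg)
```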
Prompt engineering still plays a key role: the prompt must clearly instruct how to use the retrieved context without overwhelming the model with too much irrelevant data.
RAG combines external retrieval mechanisms with prompt engineering to help the AI generate more accurate, up-to-date, and context-rich responses.
9. Practical Retrieval-Augmented Generation (RAG)
Real-world RAG workflow:
- Fetch relevant documents or snippets from an external source (database, API, or knowledge base).
- Inject those retrieved pieces into the prompt before asking the question.
- This allows the language model to generate answers grounded in the latest content.
Use practical RAG when:
- You need factually correct and up-to-date responses, especially in domains where model knowledge might be stale.
- You want to bridge the gap between baked-in training and the external reality.
Core Insights
- Prompt engineering remains crucial in RAG: it's not enough to supply context; the prompt must guide how the model uses the context.
- Well-designed prompts improve coherence and prevent irrelevant context from confusing the model.
The hands-on aspect of RAG demonstrates how to actually retrieve relevant information and integrate it into the prompt for more accurate AI outputs.
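The retrieve-then-inject workflow can be sketched end to end. This toy retriever ranks documents by simple word overlap; a real pipeline would use embeddings and a vector store, and the sample documents are invented for illustration:

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by word overlap with the query.
    A real pipeline would use embeddings and a vector store instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def rag_prompt(query, documents):
    """Inject retrieved snippets into the prompt, with an instruction
    telling the model how to use (and stay within) the context."""
    snippets = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using only the context below; say 'unknown' if it is "
        "not covered.\n"
        f"Context:\n{snippets}\n\n"
        f"Question: {query}"
    )

docs = [                                          # hypothetical documents
    "The v2.1 release adds streaming output.",
    "Pricing changed to usage-based in March.",
    "The office cafeteria menu rotates weekly.",
]
p = rag_prompt("what changed about pricing", docs)
print(p)
```

The explicit "use only the context" instruction is the prompt-engineering half of RAG noted above: it guides how the model uses the retrieved text instead of falling back on stale training data.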
10. Advanced Prompt Engineering
To advance in prompt engineering:
- Auto-prompting: the model generates its own prompts or reasoning steps (like CoT) without manual input.
- LangChain: a framework to build AI apps by chaining together prompts, tools, memory, and external data sources.
- ReAct: a prompting strategy that combines Reasoning and Acting, allowing the model to think step by step and take actions like web search or calculations.
- Transformer & meta-prompting: using knowledge of how transformers work to design prompts that help the model guide, plan, or reflect more effectively.
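One ReAct turn (thought, action, observation) can be sketched with a toy harness. In a real system the thought and action would be parsed out of the model's text and the tools would be actual search or calculator services; here both tools are stubs:

```python
def react_step(thought, action, tools):
    """One Reason+Act turn: the model emits a thought and an action;
    the harness executes the action and returns the observation.
    (Toy harness; real ReAct parses thought/action from model output.)"""
    name, arg = action
    observation = tools[name](arg)
    return f"Thought: {thought}\nAction: {name}[{arg}]\nObservation: {observation}"

tools = {
    # eval() on a fixed arithmetic string stands in for a calculator tool;
    # never eval untrusted input in real code.
    "calculate": lambda expr: str(eval(expr)),
    # Stub web search returning a canned snippet.
    "search": lambda q: f"(top search snippet for: {q})",
}

trace = react_step(
    "I need the total cost before answering.",
    ("calculate", "3 * 7"),
    tools,
)
print(trace)
```

The observation would be fed back into the prompt for the next turn, letting the model alternate between reasoning and tool use.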