1) In one-shot prompting, the primary goal is to:
Guide a pre-trained model using a single example included in the prompt, so it can generalize to the task without any fine-tuning
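A minimal sketch of what that looks like in practice; the classification task and review texts below are made up for illustration:
```python
# One-shot prompting: a single worked example is embedded in the prompt
# itself, and the model's weights are never updated.
one_shot_prompt = """Classify the sentiment of the review as Positive or Negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: Positive

Review: "The app crashes every time I open it."
Sentiment:"""
```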
2) Which of the following approaches is best suited for optimizing prompts to ensure that a language model generates responses that are both concise and contextually relevant?
Iteratively refining the prompt based on generated outputs
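A rough sketch of that loop, where `generate` and `score` are hypothetical stand-ins for a model call and a quality check, not part of any real SDK:
```python
def refine_prompt(prompt, generate, score, max_rounds=3, threshold=0.9):
    """Iteratively tighten a prompt until the output scores well enough.

    `generate(prompt) -> str` and `score(text) -> float` are assumed
    callables supplied by the caller.
    """
    for _ in range(max_rounds):
        output = generate(prompt)
        if score(output) >= threshold:
            break
        # Feed the shortcoming back into the prompt for the next round.
        prompt += "\nKeep the answer brief and directly relevant to the question."
    return prompt
```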
3) When optimizing prompts for generating structured outputs (like JSON), which of the following modifications can significantly improve the model's accuracy in producing the desired structure?
Adding explicit instructions to the prompt
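For example, spelling out the exact schema in the prompt tends to help; the field names below are illustrative:
```python
# Explicit structural instructions: state the output format, the exact
# keys, and that nothing else should be returned.
json_prompt = """Extract the order details from the message below.
Respond with only a JSON object using exactly these keys:
{"product": string, "quantity": integer, "urgent": boolean}

Message: "Please send 3 boxes of the blue widgets as soon as possible."
JSON:"""
```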
4) Which of the following statements is true about the licensing of open-source LLMs?
Open-source LLMs can have a variety of licenses, some of which may impose specific usage restrictions
5) In the context of preventing hallucinations in generative AI models, what does model distillation refer to?
Reducing the model size by approximating a larger model
6) In the context of fine-tuning an LLM for a specific application, why might one opt to use a lower temperature setting during inference?
To produce more deterministic and consistent outputs
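A sketch of a low-temperature inference call; the client usage follows the OpenAI Python SDK, but any chat API with a `temperature` parameter behaves the same way:
```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user",
               "content": "Summarize our refund policy in two sentences."}],
    temperature=0.2,  # low temperature: sharper distribution, more repeatable output
)
print(response.choices[0].message.content)
```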
7) What is a potential drawback of few-shot prompting that practitioners should be aware of?
Risk of overfitting to the examples in the prompt
8) Which of the following strategies is most effective for reducing the length of responses generated by a language model without significantly compromising on the quality of the response?
Setting a maximum token limit for the response
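A sketch combining a hard token cap with a brevity instruction in the prompt, so the model plans a short answer instead of being cut off mid-sentence; again, the OpenAI Python SDK is only an assumed example:
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "In at most three sentences, explain what tokenization is."}],
    max_tokens=120,  # hard ceiling on the response length
)
```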
9) How does a high temperature value affect the probability distribution of the next token in LLM outputs?
It flattens the distribution, making low-probability tokens more likely
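A worked example: the logits are divided by the temperature before the softmax, so higher temperatures flatten the resulting distribution. The logit values here are arbitrary:
```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Next-token probabilities from logits at a given sampling temperature."""
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0]  # three candidate tokens
print(softmax_with_temperature(logits, 0.5))  # peaked:  ~[0.98, 0.02, 0.00]
print(softmax_with_temperature(logits, 1.0))  # default: ~[0.84, 0.11, 0.04]
print(softmax_with_temperature(logits, 2.0))  # flatter: ~[0.63, 0.23, 0.14]
```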
10) Which of the following strategies is least effective in reducing hallucinations in language models?
Using a smaller dataset for training
11) A generative AI system used for educational content sometimes includes outdated information. What methods can address this?
Regularly updating with the latest academic research
Cross-verifying with up-to-date references
Incorporating a feedback mechanism for educators
12) What practice would help reduce hallucinations in an LLM giving factual advice?
Providing specific source references in prompts
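One way to apply this is to paste the source material directly into the prompt and forbid answers that go beyond it; the placeholder excerpt below stands in for a real document:
```python
grounded_prompt = """Answer the question using ONLY the source below.
If the source does not contain the answer, reply "Not stated in the source."

Source: [relevant excerpt from the product documentation goes here]

Question: What is the warranty period?
Answer:"""
```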
13) In a customer recommendation system, how can hallucination errors be minimized?
Process real customer feedback in updates
Ensure realistic constraints in prompts
14) Which of the following scenarios would most benefit from using a higher temperature setting for an LLM?
Generating poetry or creative writing
15) Which scenario best exemplifies the use of one-shot prompting?
Giving one example of a complex task and expecting the model to generalize
16) How can developers ensure generative AI avoids spreading misinformation?
Using current and reliable sources
Regularly updating the model with new training data
17) What is the primary issue with the "bias amplification" phenomenon in AI systems?
It leads to the reinforcement and exaggeration of existing biases in the data
18) What steps can be taken to ensure LLMs provide culturally sensitive outputs?
Curate culture-specific datasets with diverse perspectives
Employ region-specific context in prompts
Validate outputs with cultural experts
19) Which of the following is a key difference between the development communities of open-source and closed-source LLMs?
Open-source communities typically involve contributions from a wide range of independent developers and organizations
20) In what ways can the efficacy of prompts in multilingual models be improved?
Applying language-specific nuances
Providing examples in multiple languages
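A sketch of both ideas at once: a few-shot prompt whose examples span several languages (all sentences are made up for illustration):
```python
multilingual_prompt = """Translate each sentence into English.

French: "Le chat dort sur le canapé."
English: "The cat is sleeping on the sofa."

Spanish: "La reunión empieza a las nueve."
English: "The meeting starts at nine."

German: "Der Zug hat zehn Minuten Verspätung."
English:"""
```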