Oracle 1z0-1127-25 Practice Test

Oracle Cloud Infrastructure 2025 Generative AI Professional

Last exam update: Nov 18, 2025
Page 1 out of 6. Viewing questions 1-15 out of 88

Question 1

What is the role of temperature in the decoding process of a Large Language Model (LLM)?

  • A. To increase the accuracy of the most likely word in the vocabulary
  • B. To determine the number of words to generate in a single decoding step
  • C. To decide to which part of speech the next word should belong
  • D. To adjust the sharpness of probability distribution over vocabulary when selecting the next word
Answer: D

Explanation:
Temperature is a hyperparameter in the decoding process of LLMs that controls the randomness of
word selection by modifying the probability distribution over the vocabulary. A lower temperature
(e.g., 0.1) sharpens the distribution, making the model more likely to select the highest-probability
words, resulting in more deterministic and focused outputs. A higher temperature (e.g., 2.0) flattens
the distribution, increasing the likelihood of selecting less probable words, thus introducing more
randomness and creativity. Option D accurately describes this role. Option A is incorrect because
temperature doesn’t directly increase accuracy but influences output diversity. Option B is unrelated,
as temperature doesn’t dictate the number of words generated. Option C is also incorrect, as part-of-
speech decisions are not directly tied to temperature but to the model’s learned patterns.
Reference: General LLM decoding principles, likely covered in OCI 2025 Generative AI documentation under
decoding parameters like temperature.
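
For intuition, the sketch below shows how temperature rescales logits before the softmax. This is a generic NumPy illustration, not OCI-specific code, and the logit values are made up.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then normalize into a probability distribution."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = [2.0, 1.0, 0.5, -1.0]                  # made-up scores over a tiny vocabulary
print(softmax_with_temperature(logits, 0.1))    # sharp: almost all mass on the top token
print(softmax_with_temperature(logits, 1.0))    # unscaled distribution
print(softmax_with_temperature(logits, 2.0))    # flatter: low-probability tokens gain mass
```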


Question 2

Which statement accurately reflects the differences between Fine-tuning, Parameter Efficient Fine-Tuning (PEFT), Soft Prompting, and continuous pretraining in terms of the number of parameters modified and the type of data used?

  • A. Fine-tuning and continuous pretraining both modify all parameters and use labeled, task-specific data.
  • B. Parameter Efficient Fine-Tuning and Soft Prompting modify all parameters of the model using unlabeled data.
  • C. Fine-tuning modifies all parameters using labeled, task-specific data, whereas Parameter Efficient Fine-Tuning updates a few, new parameters also with labeled, task-specific data.
  • D. Soft Prompting and continuous pretraining are both methods that require no modification to the original parameters of the model.
Answer: C

Explanation:
Fine-tuning typically involves updating all parameters of an LLM using labeled, task-specific data to
adapt it to a specific task, which is computationally expensive. Parameter Efficient Fine-Tuning
(PEFT), through methods such as LoRA (Low-Rank Adaptation), updates only a small subset of parameters
(often newly added ones) while still using labeled, task-specific data, making it more efficient. Option
C correctly captures this distinction. Option A is wrong because continuous pretraining uses
unlabeled data and isn’t task-specific. Option B is incorrect as PEFT and Soft Prompting don’t modify
all parameters, and Soft Prompting typically uses labeled examples indirectly. Option D is inaccurate
because continuous pretraining modifies parameters, while Soft Prompting doesn’t.
Reference: OCI 2025 Generative AI documentation likely discusses Fine-tuning and PEFT under model
customization techniques.
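
As a rough sketch of the PEFT idea (assuming a LoRA-style adapter; all dimensions and values are illustrative), only the small matrices A and B below would be trained, while the pretrained weight W stays frozen:

```python
import numpy as np

d_model, rank = 8, 2                              # illustrative sizes; real models are far larger
rng = np.random.default_rng(0)

W = rng.normal(size=(d_model, d_model))           # pretrained weight, frozen during PEFT
A = rng.normal(size=(rank, d_model)) * 0.01       # new, trainable low-rank factor
B = np.zeros((d_model, rank))                     # zero-initialized so training starts from W alone

def lora_forward(x, scale=1.0):
    """Frozen base projection plus a small trainable low-rank correction (B @ A)."""
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=d_model)
print(lora_forward(x))   # only A and B (2 * rank * d_model values) would receive gradient updates
```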


Question 3

What is prompt engineering in the context of Large Language Models (LLMs)?

  • A. Iteratively refining the ask to elicit a desired response
  • B. Adding more layers to the neural network
  • C. Adjusting the hyperparameters of the model
  • D. Training the model on a large dataset
Answer: A

Explanation:
Prompt engineering involves crafting and refining input prompts to guide an LLM to produce desired
outputs without altering its internal structure or parameters. It’s an iterative process that leverages
the model’s pre-trained knowledge, making Option A correct. Option B is unrelated, as adding layers
pertains to model architecture design, not prompting. Option C refers to hyperparameter tuning
(e.g., temperature), not prompt engineering. Option D describes pretraining or fine-tuning, not
prompt engineering.
Reference: OCI 2025 Generative AI documentation likely covers prompt engineering in sections on model
interaction or inference.
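
A minimal illustration of that iterative refinement is sketched below; generate() is a hypothetical stand-in for any hosted LLM call, and both prompts are invented examples:

```python
def generate(prompt):
    """Hypothetical stand-in for a call to a hosted LLM endpoint."""
    return f"<model response to: {prompt!r}>"

# First attempt: vague, so the response format is unpredictable.
v1 = "Summarize this support ticket."

# Refined ask: same model, same parameters; only the prompt changes.
v2 = ("Summarize this support ticket in exactly three bullet points, "
      "each under 15 words, and end with the customer's requested action.")

print(generate(v1))
print(generate(v2))
```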


Question 4

What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?

  • A. The model's ability to generate imaginative and creative content
  • B. A technique used to enhance the model's performance on specific tasks
  • C. The process by which the model visualizes and describes images in detail
  • D. The phenomenon where the model generates factually incorrect information or unrelated content as if it were true
Answer: D

Explanation:
In LLMs, "hallucination" refers to the generation of plausible-sounding but factually incorrect or
irrelevant content, often presented with confidence. This occurs due to the model’s reliance on
patterns in training data rather than factual grounding, making Option D correct. Option A describes
a positive trait, not hallucination. Option B is unrelated, as hallucination isn’t a performance-
enhancing technique. Option C pertains to multimodal models, not the general definition of
hallucination in LLMs.
Reference: OCI 2025 Generative AI documentation likely addresses hallucination under model limitations or
evaluation metrics.


Question 5

What does in-context learning in Large Language Models involve?

  • A. Pretraining the model on a specific domain
  • B. Training the model using reinforcement learning
  • C. Conditioning the model with task-specific instructions or demonstrations
  • D. Adding more layers to the model
Answer: C

Explanation:
In-context learning is a capability of LLMs where the model adapts to a task by interpreting
instructions or examples provided in the input prompt, without additional training. This leverages
the model’s pre-trained knowledge, making Option C correct. Option A refers to domain-specific
pretraining, not in-context learning. Option B involves reinforcement learning, a different training
paradigm. Option D pertains to architectural changes, not learning via context.
Reference: OCI 2025 Generative AI documentation likely discusses in-context learning in sections on prompt-
based customization.
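
The sketch below builds a prompt that conditions a model with an instruction plus two demonstrations; no weights change, and the sentiment task is an invented example:

```python
demonstrations = [
    ("The delivery was two weeks late.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

def build_prompt(new_review):
    """Assemble an instruction, labeled demonstrations, then the new input to classify."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in demonstrations:
        lines += [f"Review: {text}", f"Sentiment: {label}", ""]
    lines += [f"Review: {new_review}", "Sentiment:"]
    return "\n".join(lines)

print(build_prompt("The dashboard is confusing and slow."))
```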


Question 6

What is the purpose of embeddings in natural language processing?

  • A. To increase the complexity and size of text data
  • B. To translate text into a different language
  • C. To create numerical representations of text that capture the meaning and relationships between words or phrases
  • D. To compress text data into smaller files for storage
Answer: C

Explanation:
Embeddings in NLP are dense, numerical vectors that represent words, phrases, or sentences in a
way that captures their semantic meaning and relationships (e.g., "king" and "queen" being close in
vector space). This enables models to process text mathematically, making Option C correct. Option
A is false, as embeddings simplify processing, not increase complexity. Option B relates to
translation, not embeddings’ primary purpose. Option D is incorrect, as embeddings aren’t primarily
for compression but for representation.
Reference: OCI 2025 Generative AI documentation likely covers embeddings under data preprocessing or
vector databases.
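
A toy example of that vector-space intuition, using made-up three-dimensional vectors in place of real embedding-model output:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tiny invented vectors standing in for real embedding-model output.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.88, 0.82, 0.15],
    "car":   [0.10, 0.20, 0.95],
}

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related meanings
print(cosine_similarity(embeddings["king"], embeddings["car"]))    # low: unrelated meanings
```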


Question 7

What is the main advantage of using few-shot model prompting to customize a Large Language
Model (LLM)?

  • A. It allows the LLM to access a larger dataset.
  • B. It eliminates the need for any training or computational resources.
  • C. It provides examples in the prompt to guide the LLM to better performance with no training cost.
  • D. It significantly reduces the latency for each model request.
Answer: C

Explanation:
Few-shot prompting involves providing a few examples in the prompt to guide the LLM’s behavior,
leveraging its in-context learning ability without requiring retraining or additional computational
resources. This makes Option C correct. Option A is false, as few-shot prompting doesn’t expand the
dataset. Option B overstates the case, as inference still requires resources. Option D is incorrect, as
latency isn’t significantly affected by few-shot prompting.
Reference: OCI 2025 Generative AI documentation likely highlights few-shot prompting in sections on efficient
customization.


Question 8

Which is a distinctive feature of GPUs in Dedicated AI Clusters used for generative AI tasks?

  • A. GPUs are shared with other customers to maximize resource utilization.
  • B. The GPUs allocated for a customer’s generative AI tasks are isolated from other GPUs.
  • C. GPUs are used exclusively for storing large datasets, not for computation.
  • D. Each customer's GPUs are connected via a public Internet network for ease of access.
Answer: B

Explanation:
In Dedicated AI Clusters (e.g., in OCI), GPUs are allocated exclusively to a customer for their
generative AI tasks, ensuring isolation for security, performance, and privacy. This makes Option B
correct. Option A describes shared resources, not dedicated clusters. Option C is false, as GPUs are
for computation, not storage. Option D is incorrect, as public Internet connections would
compromise security and efficiency.
Reference: OCI 2025 Generative AI documentation likely details GPU isolation under Dedicated AI Clusters.


Question 9

What happens if a period (.) is used as a stop sequence in text generation?

  • A. The model ignores periods and continues generating text until it reaches the token limit.
  • B. The model generates additional sentences to complete the paragraph.
  • C. The model stops generating text after it reaches the end of the current paragraph.
  • D. The model stops generating text after it reaches the end of the first sentence, even if the token limit is much higher.
Answer: D

Explanation:
A stop sequence in text generation (e.g., a period) instructs the model to halt generation once it
encounters that token, regardless of the token limit. If set to a period, the model stops after the first
sentence ends, making Option D correct. Option A is false, as stop sequences are enforced. Option B
contradicts the stop sequence’s purpose. Option C is incorrect, as it stops at the sentence level, not
paragraph.
Reference: OCI 2025 Generative AI documentation likely explains stop sequences under text generation
parameters.
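
A simplified generation loop with a stop sequence might look like the sketch below; the toy next-token function merely replays a canned continuation and stands in for a real model:

```python
def generate_with_stop(next_token_fn, stop_sequence=".", max_tokens=256):
    """Append tokens until the stop sequence appears or the token limit is reached."""
    output = ""
    for _ in range(max_tokens):
        output += next_token_fn(output)
        if stop_sequence in output:    # the stop check wins even if max_tokens is far higher
            break
    return output

# Toy stand-in for a model: replays a canned two-sentence continuation token by token.
canned = iter(["The", " sky", " is", " blue", ".", " It", " may", " rain", " later", "."])
print(generate_with_stop(lambda _out: next(canned)))   # -> "The sky is blue."
```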


Question 10

What is the purpose of frequency penalties in language model outputs?

  • A. To ensure that tokens that appear frequently are used more often
  • B. To penalize tokens that have already appeared, based on the number of times they have been used
  • C. To reward the tokens that have never appeared in the text
  • D. To randomly penalize some tokens to increase the diversity of the text
Answer: B

Explanation:
Frequency penalties reduce the likelihood of repeating tokens that have already appeared in the
output, based on their frequency, to enhance diversity and avoid repetition. This makes Option B
correct. Option A is the opposite effect. Option C describes a different mechanism (e.g., presence
penalty in some contexts). Option D is inaccurate, as penalties aren’t random but frequency-based.
Reference: OCI 2025 Generative AI documentation likely covers frequency penalties under output control
parameters.
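
A common formulation subtracts the penalty times each token's prior count from its logit, as sketched below with invented numbers:

```python
import numpy as np
from collections import Counter

def apply_frequency_penalty(logits, generated_token_ids, penalty=0.5):
    """Lower each token's logit in proportion to how often it has already been generated."""
    counts = Counter(generated_token_ids)
    adjusted = np.array(logits, dtype=float)
    for token_id, count in counts.items():
        adjusted[token_id] -= penalty * count   # more repetitions -> larger penalty
    return adjusted

logits = [2.0, 1.5, 0.5]        # scores for a toy 3-token vocabulary
history = [0, 0, 1]             # token 0 already generated twice, token 1 once
print(apply_frequency_penalty(logits, history))   # token 0 is penalized the most
```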

Question 11

Which is a key characteristic of Large Language Models (LLMs) without Retrieval Augmented
Generation (RAG)?

  • A. They always use an external database for generating responses.
  • B. They rely on internal knowledge learned during pretraining on a large text corpus.
  • C. They cannot generate responses without fine-tuning.
  • D. They use vector databases exclusively to produce answers.
Answer: B

Explanation:
LLMs without Retrieval Augmented Generation (RAG) depend solely on the knowledge encoded in
their parameters during pretraining on a large, general text corpus. They generate responses
based on this internal knowledge without accessing external data at inference time, making Option B
correct. Option A is false, as external databases are a feature of RAG, not standalone LLMs. Option C
is incorrect, as LLMs can generate responses without fine-tuning via prompting or in-context
learning. Option D is wrong, as vector databases are used in RAG or similar systems, not in basic
LLMs. This reliance on pretraining distinguishes non-RAG LLMs from those augmented with real-time
retrieval.
Reference: OCI 2025 Generative AI documentation likely contrasts RAG and non-RAG LLMs under model
architecture or response generation sections.


Question 12

What do embeddings in Large Language Models (LLMs) represent?

  • A. The color and size of the font in textual data
  • B. The frequency of each word or pixel in the data
  • C. The semantic content of data in high-dimensional vectors
  • D. The grammatical structure of sentences in the data
Answer: C

Explanation:
Embeddings in LLMs are high-dimensional vectors that encode the semantic meaning of words,
phrases, or sentences, capturing relationships like similarity or context (e.g., "cat" and "kitten" being
close in vector space). This allows the model to process and understand text numerically, making
Option C correct. Option A is irrelevant, as embeddings don’t deal with visual attributes. Option B is
incorrect, as frequency is a statistical measure, not the purpose of embeddings. Option D is partially
related but too narrow—embeddings capture semantics beyond just grammar.
Reference: OCI 2025 Generative AI documentation likely discusses embeddings under data representation or
vectorization topics.


Question 13

What is the function of the Generator in a text generation system?

  • A. To collect user queries and convert them into database search terms
  • B. To rank the information based on its relevance to the user's query
  • C. To generate human-like text using the information retrieved and ranked, along with the user's original query
  • D. To store the generated responses for future use
Answer: C

Explanation:
In a text generation system (e.g., with RAG), the Generator is the component (typically an LLM) that
produces coherent, human-like text based on the user’s query and any retrieved information (if
applicable). It synthesizes the final output, making Option C correct. Option A describes a Retriever’s
role. Option B pertains to a Ranker. Option D is unrelated, as storage isn’t the Generator’s function
but a separate system task. The Generator’s role is critical in transforming inputs into natural
language responses.
Reference: OCI 2025 Generative AI documentation likely defines the Generator under RAG or text generation
workflows.


Question 14

What differentiates Semantic search from traditional keyword search?

  • A. It relies solely on matching exact keywords in the content.
  • B. It depends on the number of times keywords appear in the content.
  • C. It involves understanding the intent and context of the search.
  • D. It is based on the date and author of the content.
Answer: C

Explanation:
Semantic search uses embeddings and NLP to understand the meaning, intent, and context behind a
query, rather than just matching exact keywords (as in traditional search). This enables more relevant
results, even if exact terms aren’t present, making Option C correct. Options A and B describe
traditional keyword search mechanics. Option D is unrelated, as metadata like date or author isn’t
the primary focus of semantic search. Semantic search leverages vector representations for deeper
understanding.
Reference: OCI 2025 Generative AI documentation likely contrasts semantic and keyword search under search
or retrieval sections.
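
The contrast can be sketched as below: keyword matching finds no overlap, while ranking by embedding similarity (made-up vectors standing in for a real embedding model) still surfaces the relevant document:

```python
import numpy as np

docs = {
    "doc1": "How to reset your account password",
    "doc2": "Steps for recovering lost login credentials",
}
query = "forgot my sign-in details"

# Keyword search: the query shares no terms with either document, so nothing matches.
keyword_hits = [d for d, text in docs.items()
                if set(query.lower().split()) & set(text.lower().split())]
print(keyword_hits)   # []

# Semantic search: rank by cosine similarity of (invented) embedding vectors.
vectors = {"query": [0.90, 0.30], "doc1": [0.70, 0.40], "doc2": [0.88, 0.32]}

def cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(docs, key=lambda d: cosine(vectors["query"], vectors[d]), reverse=True)
print(ranked)         # ['doc2', 'doc1']: doc2 ranks first despite zero keyword overlap
```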


Question 15

What does the Ranker do in a text generation system?

  • A. It generates the final text based on the user's query.
  • B. It sources information from databases to use in text generation.
  • C. It evaluates and prioritizes the information retrieved by the Retriever.
  • D. It interacts with the user to understand the query better.
Answer: C

Explanation:
In systems like RAG, the Ranker evaluates and sorts the information retrieved by the Retriever (e.g.,
documents or snippets) based on relevance to the query, ensuring the most pertinent data is passed
to the Generator. This makes Option C correct. Option A is the Generator’s role. Option B describes
the Retriever. Option D is unrelated, as the Ranker doesn’t interact with users but processes
retrieved data. The Ranker enhances output quality by prioritizing relevant content.
Reference: OCI 2025 Generative AI documentation likely details the Ranker under RAG pipeline components.
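
An end-to-end sketch of that division of labor, with toy scoring functions and a placeholder in place of a real LLM call, might look like this:

```python
def retrieve(query, corpus, k=5):
    """Retriever: pull candidate passages that mention any query word (toy substring match)."""
    words = query.lower().split()
    return [p for p in corpus if any(w in p.lower() for w in words)][:k]

def rank(query, passages):
    """Ranker: order the retrieved passages by relevance (toy score: shared-word count)."""
    query_words = set(query.lower().split())
    return sorted(passages, key=lambda p: len(query_words & set(p.lower().split())), reverse=True)

def generate(query, context):
    """Generator: hypothetical LLM call that writes the answer from the query plus ranked context."""
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return f"<LLM response to a prompt of {len(prompt)} characters>"   # placeholder only

corpus = [
    "Temperature controls randomness during decoding.",
    "Stop sequences end generation early.",
    "Embeddings encode semantic meaning.",
]
query = "what does temperature control"
passages = rank(query, retrieve(query, corpus))
print(generate(query, "\n".join(passages)))
```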
