Oracle 1Z0-1084-25 Practice Test

Oracle Cloud Infrastructure 2025 Developer Professional

Last exam update: Nov 18, 2025
Page 1 out of 7. Viewing questions 1-15 out of 100

Question 1

Which statement about microservices is FALSE?

  • A. They are typically designed around business capabilities.
  • B. It is fairly common for them to communicate with HTTP.
  • C. Multiple microservices can run in one process.
  • D. They are independently deployable.
Answer:

C


Explanation:
The FALSE statement is C: "Multiple microservices can run in one process." Microservices are designed
to be independent, autonomous services that can be deployed, scaled, and managed separately.
Each microservice typically runs as its own separate process. This separation allows for loose
coupling and independent scalability of individual services. Running multiple microservices in a
single process would violate the fundamental principle of microservices architecture, which
emphasizes the isolation and independence of services. Combining multiple microservices into a
single process would tightly couple them, making it difficult to independently manage, scale, and
update each service. The other statements provided are true about microservices: It is fairly common
for microservices to communicate with HTTP: Microservices often communicate with each other
using lightweight protocols like HTTP/REST. This allows for interoperability, flexibility, and ease of
communication between microservices. Microservices are typically designed around business
capabilities: Microservices architecture advocates designing services based on specific business
capabilities or functionalities. Each microservice focuses on a specific business domain or task,
enabling modular and scalable development. They are independently deployable: Microservices are
designed to be independently deployable units. This means that each microservice can be
developed, tested, deployed, and updated separately without affecting other services. This flexibility
allows for faster development cycles and more efficient scalability.


Question 2

Which testing measure should be considered when using test cases that simultaneously validate a
deployment and perform a selected set of functional tasks?

  • A. Resource Utilization
  • B. Functionality
  • C. Scalability
  • D. Robust Deployment
  • E. Resiliency
Answer:

D


Explanation:
The correct answer is "Robust Deployment." Robust Deployment refers to the ability of an application or
system to be deployed reliably and consistently, without errors or failures. It involves ensuring that
the deployment process is well-defined, automated, and able to handle different scenarios and
configurations. When conducting testing that combines the validation of deployment and functional
tasks, it is crucial to ensure that the deployment itself is robust. This means verifying that the
application or system can be successfully deployed and configured without encountering
deployment-related issues such as incorrect configurations, missing dependencies, or compatibility
problems. By considering "Robust Deployment" as a testing measure, you can evaluate the reliability
and effectiveness of the deployment process, ensuring that the application or system is deployed
correctly and ready to perform the selected set of functional tasks.


Question 3

What is the open source engine for Oracle Functions?

  • A. Fn Project
  • B. Apache OpenWhisk
  • C. OpenFaas
  • D. Knative
Answer:

A


Explanation:
The Fn Project is an open source serverless computing platform that serves as the engine for Oracle
Functions. It provides a runtime environment for executing functions in a serverless architecture. The
Fn Project enables developers to build and deploy functions using different programming languages,
including Java, Python, Node.js, and more. The Fn Project is designed to be flexible and extensible,
allowing developers to define functions as small units of code and execute them in response to
events or triggers. It supports event-driven execution, allowing functions to be triggered by various
events such as HTTP requests, messages from messaging systems, or changes in data. Oracle
Functions leverages the Fn Project as its underlying engine, providing a managed serverless platform
within the Oracle Cloud Infrastructure. It allows developers to easily develop, deploy, and manage
functions using the familiar Fn Project framework and tooling. With Oracle Functions, developers can
focus on writing their function code while leaving the infrastructure management, scaling, and
operational tasks to the platform.
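As a sketch of the Fn Project workflow described above (the application and function names here are hypothetical, and an already-configured Fn context, either a local Fn server or OCI Functions, is assumed):

```shell
# Assumptions: the Fn CLI is installed and a context is configured;
# "hello-app" and "hello-fn" are placeholder names.
fn init --runtime python hello-fn   # scaffold func.py, func.yaml, requirements.txt
cd hello-fn
fn deploy --app hello-app           # build the function image and deploy it
fn invoke hello-app hello-fn        # invoke the deployed function
```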


Question 4

Which feature is typically NOT associated with Cloud Native?

  • A. Immutable Infrastructure
  • B. Declarative APIs
  • C. Containers
  • D. Application Servers
  • E. Service Meshes
Answer:

D


Explanation:
The feature that is typically NOT associated with Cloud Native is "Application Servers." Cloud Native
architecture emphasizes lightweight, scalable, and containerized deployments, which often replace
traditional monolithic application servers. Instead of relying on application servers, Cloud Native
applications are typically deployed as containerized microservices that can be orchestrated and
managed using container orchestration platforms like Kubernetes. This approach enables greater
flexibility, scalability, and agility in deploying and managing applications. While application servers
have been widely used in traditional application architectures, they are not a characteristic feature of
Cloud Native architectures. Cloud Native architectures focus on containerization, declarative APIs,
immutable infrastructure, and service meshes to enable efficient and scalable deployment and
management of applications.


Question 5

You want to push a new image in the Oracle Cloud Infrastructure (OCI) Registry. Which TWO actions
would you need to perform? (Choose two.)

  • A. Generate an API signing key to complete the authentication via Docker CLI.
  • B. Generate an auth token to complete the authentication via Docker CLI.
  • C. Assign an OCI defined tag via OCI CLI to the image.
  • D. Assign a tag via Docker CLI to the image.
  • E. Generate an OCI tag namespace in your repository.
Answer:

B, D


Explanation:
To push a new image to the Oracle Cloud Infrastructure (OCI) Registry, you would need to perform
the following two actions: Assign a tag via Docker CLI to the image: Before pushing the image, you
need to assign a tag to it using the Docker CLI. The tag helps identify the image and associate it with
a specific version or label. Generate an auth token to complete the authentication via Docker CLI: To
authenticate and authorize the push operation, you need to generate an auth token. This token is
used to authenticate your Docker CLI with the OCI Registry, allowing you to push the image securely.
Note: Generating an API signing key, assigning an OCI defined tag via OCI CLI, and generating an OCI
tag namespace are not required steps for pushing a new image to the OCI Registry.
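The two required actions can be sketched with the Docker CLI; the region key, tenancy namespace, username, and image name below are illustrative placeholders, and the auth token is supplied as the login password:

```shell
# Assumptions: "iad" region key and "mytenancy" namespace are placeholders.
docker login iad.ocir.io -u 'mytenancy/jdoe@example.com'    # password: the auth token
docker tag myapp:latest iad.ocir.io/mytenancy/myapp:latest  # assign a tag via Docker CLI
docker push iad.ocir.io/mytenancy/myapp:latest              # push the image to OCIR
```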


Question 6

You plan to implement logging in your services that will run in Oracle Cloud Infrastructure (OCI)
Container Engine for Kubernetes (OKE). Which statement describes the appropriate logging
approach?

  • A. All services log to standard output only.
  • B. Each service logs to its own log file.
  • C. All services log to an external logging system.
  • D. All services log to a shared log file.
Answer:

A


Explanation:
The appropriate logging approach for services running in Oracle Cloud Infrastructure (OCI) Container
Engine for Kubernetes (OKE) is: "All services log to standard output only." When running services in a
containerized environment like OKE, it is recommended to follow the Twelve-Factor App
methodology, which suggests treating logs as event streams. According to this methodology, services
should write their log events to standard output (stdout) instead of writing to log files. By logging to
standard output, the container runtime (such as Kubernetes) can collect and aggregate the logs
generated by the services. These logs can then be accessed and managed through the container
runtime's logging infrastructure. Logging to standard output offers several advantages in a
containerized environment: Simplicity and consistency: Standardizing on logging to stdout ensures a
consistent approach across different services, making it easier to manage and analyze logs. Log
aggregation: The container runtime can collect the logs from all the running containers and provide
centralized log management, allowing you to access and search logs from different services in one
place. Scalability: Since logs are written to stdout, they can be easily handled by the container
runtime's log management system, which can scale to handle large volumes of log data. Separation
of concerns: By logging to stdout, the responsibility of managing log files and their rotation is shifted
to the container runtime, allowing the services to focus on their core functionality. While it is
possible to log to log files or external logging systems, the recommended approach in a containerized
environment like OKE is to log to standard output and leverage the logging infrastructure provided by
the container runtime.
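Once services write to stdout, the logs can be read through standard Kubernetes tooling; the label and deployment names below are hypothetical:

```shell
# Assumptions: a cluster context is configured and "myservice" is a
# placeholder label/deployment name.
kubectl logs -l app=myservice --all-containers   # aggregate stdout from matching pods
kubectl logs -f deployment/myservice             # follow a deployment's log stream live
```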


Question 7

Which is ONE of the differences between a microservice and a serverless function?

  • A. Microservices are used for long running operations while serverless functions are used for short running operations.
  • B. Microservices are triggered by events while serverless functions are not.
  • C. Microservices are stateless while serverless functions are stateful.
  • D. Microservices always use a data store while serverless functions never use a data store.
Answer:

A


Explanation:
The correct answer is: Microservices are used for long running operations while serverless functions
are used for short running operations. One of the key differences between microservices and
serverless functions is the duration of their execution. Microservices are typically designed to handle
long-running operations and may continuously run and process requests as part of a larger system.
They are often deployed and managed as long-lived services. On the other hand, serverless functions
are designed to handle short-lived operations or tasks that execute in response to specific events or
triggers. They are event-driven and execute only when invoked, providing a lightweight and
ephemeral computing model. Serverless functions are often used for executing small, isolated pieces
of code without the need for managing infrastructure or scaling concerns. While both microservices
and serverless functions can be stateless or stateful depending on the specific implementation, the
key distinction lies in the typical duration and execution pattern of these components within an
application architecture.


Question 8

What are the TWO main reasons you would choose to implement a serverless architecture? (Choose
two.)

  • A. No need for integration testing
  • B. Automatic horizontal scaling
  • C. Easier to run long-running operations
  • D. Reduced operational cost
  • E. Improved in-function state management
Answer:

B, D


Explanation:
The two main reasons to choose a serverless architecture are: Automatic horizontal scaling:
Serverless architectures allow for automatic scaling of resources based on demand. The
infrastructure automatically provisions and scales resources as needed, ensuring that applications
can handle varying workloads efficiently. This eliminates the need for manual scaling and optimizes
resource utilization. Reduced operational cost: Serverless architectures follow a pay-per-use model,
where you are billed only for the actual execution time and resources consumed by your functions.
This leads to cost savings as you don't have to pay for idle resources. Additionally, serverless
architectures remove the need for managing and maintaining servers, reducing operational
overhead and associated costs. The remaining options are incorrect: No need for integration testing: Integration testing is
still necessary in serverless architectures to ensure that functions integrate correctly with other
components and services. Serverless functions can interact with various event sources, databases,
and APIs, and testing is required to verify the integration points. Improved in-function state
management: Serverless architectures typically encourage stateless functions that operate on short-
lived requests or events. While there are mechanisms to manage state within a function, serverless
architectures are designed to be stateless by default, promoting scalability and fault tolerance. Easier
to run long-running operations: Serverless functions are generally designed for short-lived
operations rather than long-running tasks. If you have a requirement for long-running operations, a
serverless architecture may not be the ideal choice, as it has execution time limits and may not
provide the necessary resources for extended execution.


Question 9

You are developing a microservices application that will be a consumer of the Oracle Cloud
Infrastructure (OCI) Streaming service. Which API method should you use to read and process a
stream?

  • A. GetStream
  • B. ReadMessages
  • C. ProcessStream
  • D. ReadStream
  • E. GetMessages
Answer:

E


Explanation:
The correct API method to read and process a stream in the Oracle Cloud Infrastructure (OCI)
Streaming service is "GetMessages". When consuming messages from a stream in OCI Streaming,
you use the "GetMessages" API method. This method allows you to retrieve a batch of messages
from the stream for processing. You can specify parameters such as the number of messages to
retrieve, the maximum size of the messages, and the timeout for the request. By using the
"GetMessages" API method, you can retrieve messages from the stream and then process them in
your microservices application. This allows you to consume and handle the data in real-time as it
becomes available in the stream. The "GetMessages" method provides flexibility in how you
consume and process the messages, enabling you to implement custom logic and workflows based
on your specific application requirements.
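The same GetMessages flow is exposed through the OCI CLI as a sketch; the stream OCID and messages endpoint below are placeholders, and a cursor must be created before messages can be fetched:

```shell
# Assumptions: the stream OCID and messages endpoint are placeholders; the
# Streaming CLI requires the stream's messages endpoint to be passed explicitly.
ENDPOINT="https://cell-1.streaming.us-ashburn-1.oci.oraclecloud.com"
STREAM_ID="ocid1.stream.oc1.iad.example"

# Create a cursor pointing at the oldest retained message in partition 0
CURSOR=$(oci streaming stream cursor create-cursor \
  --stream-id "$STREAM_ID" --endpoint "$ENDPOINT" \
  --partition 0 --type TRIM_HORIZON \
  --query 'data.value' --raw-output)

# Read a batch of messages (GetMessages); the response also carries the next cursor
oci streaming stream message get \
  --stream-id "$STREAM_ID" --endpoint "$ENDPOINT" --cursor "$CURSOR"
```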


Question 10

You have created a repository in Oracle Cloud Infrastructure Registry (OCIR) in the us-ashburn-1 (iad)
region in your tenancy with the heyoci namespace. Which three are valid tags for an image named
myapp? (Choose three.)

  • A. iad.ccir.io/heyoci/myapp:latest
  • B. iad.ocir.io/heyoci/myapp:0.0.2-beta
  • C. iad.ocir.io/heyoci/myproject/myapp:0.0.1
  • D. us-ashburn-1.ocir.io/myproject/heyoci/myapp:latest
  • E. us-ashburn-1.ocir.io/heyoci/myapp:0.0.2-beta
  • F. us-ashburn-1.ocir.io/heyoci/myproject/myapp:0.0.2-beta
Answer:

B, C, F


Explanation:
The correct format for tagging an image in OCIR is <region>.ocir.io/<tenancy-namespace>/<repo-name>:<tag>,
where <region> is the region key (iad) or the region identifier (us-ashburn-1), the tenancy
namespace (heyoci) comes immediately after the hostname, and the repository name may itself include
a path such as myproject/myapp. The keyed valid tags are: iad.ocir.io/heyoci/myapp:0.0.2-beta
(version 0.0.2-beta, addressed by region key), iad.ocir.io/heyoci/myproject/myapp:0.0.1 (version
0.0.1 under the myproject repository path), and us-ashburn-1.ocir.io/heyoci/myproject/myapp:0.0.2-beta
(the same repository path, addressed by region identifier). The clearly invalid options are
iad.ccir.io/heyoci/myapp:latest, which misspells the registry hostname (ccir.io instead of ocir.io),
and us-ashburn-1.ocir.io/myproject/heyoci/myapp:latest, which places myproject before the tenancy
namespace heyoci. (Option E is syntactically well-formed as well; the answer key accepts B, C, and F.)
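The format rule can be expressed as a rough syntax check. This is a sketch only: it validates the shape, the ocir.io hostname spelling, and the namespace position, not whether the repository actually exists:

```shell
# valid_ocir_tag <tag> <tenancy-namespace>
# Checks <region>.ocir.io/<namespace>/<repo-name>:<tag>; the repo name may
# contain a path (e.g. myproject/myapp).
valid_ocir_tag() {
  echo "$1" | grep -Eq "^[a-z0-9-]+\.ocir\.io/$2/[A-Za-z0-9._/-]+:[A-Za-z0-9._-]+$"
}

valid_ocir_tag "iad.ocir.io/heyoci/myproject/myapp:0.0.1" heyoci && echo "valid"
if ! valid_ocir_tag "iad.ccir.io/heyoci/myapp:latest" heyoci; then
  echo "invalid: ccir.io is a misspelled hostname"
fi
```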


Question 11

As a Cloud Native developer, you develop two services in Node.js and deploy them to two different
Container Engine for Kubernetes (OKE) clusters that use the same Virtual Cloud Network (VCN). Your
security team wants to analyze the network communication between them. How can this
requirement be met in the most cost-effective way?

  • A. Rewrite the application and send the application logs to an outside log aggregator.
  • B. Deploy a third-party logging service and aggregate the network flow logs.
  • C. Use the OCI Logging service and enable VCN flow logs.
  • D. Deploy Wireshark and intercept the packets.
Answer:

C


Explanation:
The best answer is: "Use the OCI Logging service and enable VCN flow logs." To meet the
requirement of analyzing network communication between two services deployed in different
Container Engine for Kubernetes (OKE) clusters within the same Virtual Cloud Network (VCN) in a
cost-effective way, you can use the OCI Logging service and enable VCN flow logs. The VCN flow logs
feature in OCI allows you to capture and log network traffic information for your VCN resources. By
enabling VCN flow logs, you can monitor and analyze the network communication between your
services without the need for additional third-party logging services or tools. Enabling VCN flow logs
provides visibility into the network traffic, including source and destination IP addresses, ports,
protocols, and other relevant details. This information can be collected and stored in the OCI Logging
service, where you can analyze and gain insights into the network communication patterns between
your services. By leveraging the built-in capabilities of the OCI Logging service and enabling VCN flow
logs, you can fulfill the security team's requirement for network communication analysis in a cost-
effective manner. This eliminates the need for deploying additional third-party logging services or
tools, reducing complexity and potential costs associated with their setup and maintenance. The
other options mentioned are not the most cost-effective or suitable solutions for analyzing network
communication in this scenario: Deploying a third-party logging service and aggregating the network
flow logs would introduce additional costs and complexity, which may not be necessary considering
the built-in capabilities provided by OCI. Rewriting the application to send logs to an outside log
aggregator would not directly address the requirement of analyzing network communication
between the services. It would focus more on application-level logs rather than network-level
analysis. Deploying Wireshark and intercepting packets would require additional infrastructure setup
and maintenance, which may not be the most cost-effective approach for network analysis in this
scenario.


Question 12

Which of the following steps is NOT required for setting up Container Engine for Kubernetes (OKE)
cluster access using a local installation of kubectl?

  • A. Generate Auth token from the OCI console to access the OKE cluster using kubectl.
  • B. Install and configure the Oracle Cloud Infrastructure (OCI) CLI.
  • C. Set up the kubeconfig file.
  • D. Generate an API signing key pair (if you do not already have one) and upload the public key of the API signing key pair.
Answer:

A


Explanation:
The step that is NOT required for setting up the Container Engine for Kubernetes (OKE) cluster access
using a local installation of kubectl is to generate an Auth token from the OCI console. The
authentication for accessing the OKE cluster using kubectl can be performed using the OCI CLI
configuration, specifically the API signing key pair and the kubeconfig file. Here are the correct steps
for setting up the OKE cluster access using a local installation of kubectl: Set up the kubeconfig file:
The kubeconfig file contains the necessary information to authenticate and access the OKE cluster
using kubectl. It includes details such as the cluster endpoint, authentication method, and
credentials. Generate an API signing key pair (if you do not already have one) and upload the public
key of the API signing key pair: The API signing key pair is used for authentication with the OCI
services. The public key of the key pair needs to be uploaded to the OCI Console to associate it with
your user account. Install and configure the Oracle Cloud Infrastructure (OCI) CLI: The OCI CLI
provides a command-line interface to interact with the OCI services. It needs to be installed and
configured with your OCI credentials, including the user's OCID, tenancy OCID, region, and the path
to the API signing key pair. By completing these steps, you can configure kubectl to access and
manage your OKE clusters from your local machine using the OCI CLI authentication configuration.
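The kubeconfig step above can be sketched with the OCI CLI; the cluster OCID and region below are placeholders, and a configured CLI (API signing key already uploaded) is assumed:

```shell
# Assumptions: the OCI CLI is already configured (oci setup config) and the
# cluster OCID is a placeholder.
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1.iad.example \
  --file ~/.kube/config \
  --region us-ashburn-1 \
  --token-version 2.0.0

kubectl get nodes   # verify that kubectl can reach the cluster
```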


Question 13

Which TWO statements are correct regarding Docker images and containers? (Choose two.)

  • A. Only one container can be spawned from a given image at a time.
  • B. An image is a collection of immutable layers whereas a container is a running instance of an image.
  • C. If multiple containers are spawned from the same image, then they all use the same copy of that image in memory.
  • D. Writing and building a new Dockerfile is the only way you can create new Docker images.
  • E. A container can exist without an image but an image cannot exist without a container.
Answer:

B, C


Explanation:
The correct statements regarding Docker images and containers are: If multiple containers are spawned
from the same image, then they all use the same copy of that image in memory. This statement is
correct. When multiple containers are created from the same Docker image, they share the
underlying layers of the image in memory. Each container has its own isolated filesystem and
runtime environment, but they all use the same base image, which helps in efficient resource
utilization. An image is a collection of immutable layers whereas a container is a running instance of
an image. This statement is also correct. Docker images are composed of multiple layers, with each
layer representing a specific change or addition to the previous layer. These layers are read-only and
can be shared among multiple containers. On the other hand, a container is a lightweight and
isolated runtime instance created from a specific image. It is a running process that has its own
filesystem, network, and runtime configuration. The statements "Writing and building a new
Dockerfile is the only way you can create new Docker images" and "A container can exist without an
image but an image cannot exist without a container" are incorrect. Docker images can be created in
various ways, including using Dockerfiles, importing from existing images, or pulling from registries.
Additionally, a container requires an image to run, so an image must exist before a container can be
created from it.
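Both points can be demonstrated locally (assuming a running Docker daemon; alpine is just a small, convenient demo image):

```shell
# Assumptions: a Docker daemon is available and can pull from Docker Hub.
docker pull alpine
docker history alpine                     # the image's immutable layers
docker run -d --name c1 alpine sleep 60   # two containers spawned from the
docker run -d --name c2 alpine sleep 60   # same image share its read-only layers
docker ps --filter name=c                 # both running instances of one image
```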


Question 14

Which option best defines microservices?

  • A. A statically typed and compiled language.
  • B. An open-source system for automating deployment, scaling, and management of containerized applications.
  • C. A finely tuned piece of software that performs a single or small collection of tasks.
  • D. An organized collection of structured information or data, typically stored electronically in a computer system.
Answer:

C


Explanation:
The correct answer is: "A finely tuned piece of software that performs a single or small collection of
tasks." Microservices are a software architectural approach where a system is decomposed into
small, independent services that are responsible for performing a specific set of tasks. Each
microservice is designed to be focused, finely tuned, and highly cohesive, handling a single or a small
collection of related tasks. This granularity allows for better scalability, maintainability, and flexibility
in building complex applications. The other options provided do not accurately define microservices:
An open-source system for automating deployment, scaling, and management of containerized
applications refers to a container orchestration tool like Kubernetes, which can be used to manage
microservices but is not a definition of microservices itself. A statically typed and compiled language
describes a type of programming language characteristic and is not specific to the concept of
microservices. An organized collection of structured information or data, typically stored
electronically in a computer system is a definition of a database or data storage system and is
unrelated to microservices.


Question 15

A service you are deploying to Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes
(OKE) uses a docker image from a private repository in OCI Registry (OCIR). Which configuration is
necessary to provide access to this repository from OKE?

  • A. Create a docker-registry secret for OCIR with API key credentials on the cluster, and specify the imagePullSecret property in the application deployment manifest.
  • B. Create a docker-registry secret for OCIR with identity Auth Token on the cluster, and specify the imagePullSecret property in the application deployment manifest.
  • C. Create a dynamic group for nodes in the cluster, and a policy that allows the dynamic group to read repositories in the same compartment.
  • D. Add a generic secret on the cluster containing your identity credentials. Then specify a registryCredentials property in the deployment manifest.
Answer:

B


Explanation:
The necessary configuration to provide access to a private repository in OCI Registry (OCIR) from OCI
Container Engine for Kubernetes (OKE) is to create a docker-registry secret for OCIR with an identity
Auth Token on the cluster and specify the imagePullSecret property in the application deployment
manifest. Here's the breakdown of the steps: Create a docker-registry secret for OCIR with an identity
Auth Token: In order to authenticate with the private repository in OCIR, you need to create a secret
in your OKE cluster that contains the necessary credentials. This can be done by generating an
identity Auth Token from the OCI Console and creating a secret in the cluster using the kubectl
command. Specify the imagePullSecret property in the application deployment manifest: In your
application's deployment manifest (such as a Kubernetes Deployment or StatefulSet YAML file), you
need to include the imagePullSecret property and specify the name of the secret you created in the
previous step. This allows the OKE cluster to use the credentials from the secret to pull the docker
image from the private repository in OCIR during deployment. By following these steps, you can
ensure that your OKE cluster has the necessary access to the private repository in OCIR, and your
application can successfully pull the required docker image during deployment.
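The secret creation and manifest wiring described above can be sketched as follows; the server, username, and token values are placeholders, and note that the actual Kubernetes manifest field is spelled imagePullSecrets (plural):

```shell
# Assumptions: "iad" region key, "mytenancy" namespace, and all credentials
# below are placeholders.
kubectl create secret docker-registry ocirsecret \
  --docker-server=iad.ocir.io \
  --docker-username='mytenancy/jdoe@example.com' \
  --docker-password='<auth-token>' \
  --docker-email='jdoe@example.com'

# Then reference the secret in the deployment manifest (spec.template.spec):
#   imagePullSecrets:
#   - name: ocirsecret
#   containers:
#   - name: myservice
#     image: iad.ocir.io/mytenancy/myservice:latest
```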
