A Portworx administrator wants to control which nodes will host a KVDB installation.
What steps must an administrator take to ensure that KVDB installs on NODE01, NODE03, and
NODE05?
B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Portworx provides a mechanism to control KVDB pod placement through the
kvdb.selector.matchNodeName field in the StorageCluster Custom Resource Definition (CRD). This
allows administrators to explicitly specify node names where KVDB pods will be deployed. By setting
this selector to include NODE01, NODE03, and NODE05, KVDB pods will run exclusively on these
nodes, ensuring better control of quorum, fault tolerance, and performance. Node labeling alone is
insufficient unless the labels are properly referenced in the spec, making direct node name matching
the most straightforward and reliable method. This configuration must be done prior to cluster
installation to ensure proper pod placement. Official Portworx documentation on cluster deployment
and KVDB configuration confirms this method as the recommended best practice for managing KVDB
nodes, critical for maintaining database availability and consistency within the Portworx
cluster.
Reference: Pure Storage Portworx Install Guide.
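The configuration the explanation describes can be sketched as a StorageCluster fragment. The kvdb selector field names below follow the text above and are assumptions to verify against the StorageCluster CRD for your Portworx version; note that Portworx documentation also describes steering internal KVDB placement by labeling nodes with px/metadata-node=true:

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: portworx
spec:
  kvdb:
    internal: true
    # Field names as described in the explanation above -- verify against
    # the StorageCluster CRD for your Portworx version.
    selector:
      matchNodeName:
        - NODE01
        - NODE03
        - NODE05
```

As the explanation notes, this must be in place before cluster installation so the KVDB pods land on the intended nodes from the start.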
What is the name of the Kubernetes secret containing external KVDB certificates?
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The Kubernetes secret named px-kvdb-auth is used to store external KVDB certificates in a Portworx
deployment. These certificates enable mutual TLS authentication for the KVDB pods, ensuring secure
and authenticated communication between the distributed KVDB instances running on different
nodes. The px-kvdb-auth secret includes private keys and certificate chains that are essential for
encrypting KVDB traffic and verifying peer identities within the cluster. This security feature prevents
unauthorized access and protects sensitive KVDB data in transit. Portworx’s official security and
KVDB documentation detail the use of this secret, highlighting its role in certificate management and
enabling encryption for high-availability clusters running on Kubernetes environments.
Reference: Pure Storage Portworx Security Guide.
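A minimal sketch of creating such a secret with kubectl; the key names inside the secret (kvdb-ca.crt, kvdb.crt, kvdb.key) and the local file paths are assumptions to check against the Portworx external-KVDB documentation:

```shell
# Hypothetical certificate files; key names must match what Portworx expects
kubectl -n kube-system create secret generic px-kvdb-auth \
  --from-file=kvdb-ca.crt=./ca.crt \
  --from-file=kvdb.crt=./client.crt \
  --from-file=kvdb.key=./client.key
```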
How should a Portworx administrator expose metrics to externally provisioned Prometheus?
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
To enable Portworx metrics exposure compatible with external Prometheus servers, administrators
must set the exportMetrics flag inside the Prometheus monitoring section of the StorageCluster
spec. The correct configuration is:
spec:
  monitoring:
    prometheus:
      exportMetrics: true
This declarative configuration directs Portworx to expose its internal metrics on Prometheus
endpoints, allowing external monitoring tools to scrape these metrics for observability, alerting, and
dashboarding. The operator-managed Portworx cluster leverages this configuration for integration
with cloud-native monitoring stacks, ensuring seamless visibility into cluster health, performance,
and resource utilization. Using CLI commands alone is insufficient for operator-managed clusters
since they don’t persist settings or integrate with Kubernetes manifests. The official Portworx
observability guide and operator documentation endorse this method as the recommended
approach for metrics exposure and integration with Prometheus-compatible systems.
Reference: Pure Storage Portworx Monitoring Guide.
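An externally provisioned Prometheus then needs a scrape target for the exposed metrics. A hypothetical ServiceMonitor sketch (the namespace, label selector, and port name are assumptions to verify against the Services the operator creates in your cluster):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: portworx-metrics
  namespace: portworx
spec:
  selector:
    matchLabels:
      name: portworx        # label on the Portworx metrics Service -- verify in your cluster
  endpoints:
    - port: px-api          # port name is an assumption; check the Service definition
```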
What is the primary function of the Portworx OCI monitor pod in a Kubernetes environment?
B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The Portworx OCI monitor pod primarily monitors the health of Kubernetes nodes within the cluster.
It collects telemetry data and status updates about node health, resource availability, and
connectivity to ensure the Kubernetes environment hosting Portworx pods remains stable and
reliable. This monitoring is vital to detect node failures, performance degradation, or resource
bottlenecks early, enabling prompt remedial action. The OCI monitor acts as a specialized
component interacting with the Kubernetes control plane and Portworx services to provide real-time
node health insights. This role is distinct from installation facilitation or network policy management,
focusing instead on operational observability. Official Portworx operator and observability
documentation describe the OCI monitor’s function as critical for node health monitoring and overall
cluster reliability within Kubernetes environments running Portworx storage.
Reference: Pure Storage Portworx Observability Docs.
Which command could be used to install Portworx on Kubernetes using the PX-Operator?
A.
kubectl apply -f "https://install.portworx.com/<portworx_version>?operator=true&mc=false&kbver=1.25.0&ns=portworx&b=true&kd=type%3Dgp3%2Csize%3D150&s=%2F%2F22type%3Dgp3%2Csize%3D150&c=px-cluster-0584f7fl-b6be-4608-800c-2ac5fb8069e0&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true"
B.
curl -O px-ag-install.sh -L "https://install.portworx.com/$PXVER/air-gapped?kbver=$KBVER"
C.
kubectl apply -f "https://install.portworx.com/<portworx_version>?comp=pxoperator&kbver=<k8s-version>&ns=portworx"
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The officially recommended method to install Portworx with Kubernetes Operator support is using
the PX-Operator manifest. This is done by applying the manifest URL with the comp=pxoperator
parameter. The command:
kubectl apply -f "https://install.portworx.com/<portworx_version>?comp=pxoperator&kbver=<k8s-version>&ns=portworx"
deploys the Portworx Operator, which manages Portworx lifecycle operations such as installation,
upgrades, and configuration changes within the Kubernetes cluster. Specifying the Kubernetes
version (kbver) and namespace (ns) ensures compatibility and proper scoping. This operator-centric
installation enables more efficient management and automation compared to standalone scripts or
manual installations. Portworx official operator installation documentation confirms this approach as
the best practice for production deployments, streamlining Portworx management in Kubernetes
environments.
Reference: Pure Storage Portworx Operator Installation Guide.
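In practice the operator install is the first of two steps: after the operator is running, Portworx itself is installed by applying a generated StorageCluster spec from the same endpoint. A sketch, with illustrative parameters (real specs are generated via Portworx Central):

```shell
# Step 1: deploy the PX-Operator (option C above)
kubectl apply -f "https://install.portworx.com/<portworx_version>?comp=pxoperator&kbver=<k8s-version>&ns=portworx"

# Step 2: apply the generated StorageCluster spec
# (query parameters here are illustrative, not a complete real spec)
kubectl apply -f "https://install.portworx.com/<portworx_version>?operator=true&kbver=<k8s-version>&ns=portworx&c=<cluster-name>"
```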
What information is included in the Portworx diagnostics bundle (diags)?
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The Portworx diagnostics bundle, known as “diags,” aggregates comprehensive diagnostic data for
troubleshooting. This includes Portworx journal logs, which record detailed system and service
events essential for identifying errors or malfunctions. Additionally, the bundle contains outputs
from key CLI commands such as pxctl status and pxctl volume list that provide snapshots of the
cluster’s health, volume states, and configuration at the time of collection. Basic operating system
information, including kernel version, disk hardware details, and network interfaces, is also captured
to understand the underlying environment. Together, these components equip Portworx support and
administrators with the contextual data needed for effective root cause analysis and faster issue
resolution. The official Portworx support documentation recommends collecting and submitting this
bundle for all significant troubleshooting cases as it expedites problem diagnosis and
resolution.
Reference: Pure Storage Portworx Support Guide.
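The bundle described above is typically collected with pxctl; a sketch (run inside a Portworx pod or on a node with pxctl available, and verify flags against your pxctl version):

```shell
# Collect a full diagnostics bundle for the node/cluster
pxctl service diags -a
```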
Which 3 secret stores are supported by Portworx?
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Portworx integrates with three primary external secret stores to manage encryption keys securely:
AWS Key Management Service (AWS KMS), Google Cloud Key Management Service (Google Cloud
KMS), and Kubernetes Secrets. AWS KMS enables secure key storage and management for workloads
running in AWS, leveraging native cloud security features. Google Cloud KMS provides similar key
management for Google Cloud environments, allowing seamless integration with Google’s security
infrastructure. Kubernetes Secrets provide an on-premises or hybrid cloud method to store
encryption keys and sensitive configuration securely within Kubernetes clusters, suitable for private
data centers or cloud-agnostic deployments. This multi-cloud and hybrid-cloud compatibility enables
Portworx to meet diverse customer requirements for key management and regulatory compliance.
Portworx security documentation details the setup, configuration, and best practices for each
supported secret store to ensure data encryption keys are managed securely and efficiently across
environments.
Reference: Pure Storage Portworx Security Guide.
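The secret store is commonly selected in the StorageCluster spec; a sketch, where the field name and values should be verified against the StorageCluster CRD for your Portworx version:

```yaml
spec:
  # Selects the secret store backend; typical documented values include
  # k8s, aws-kms, and gcloud-kms -- verify against your Portworx version.
  secretsProvider: k8s
```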
What step is necessary to start using encrypted PVCs in Portworx?
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Using encrypted Persistent Volume Claims (PVCs) with Portworx requires that an administrator first
configure a secret provider responsible for managing the encryption keys. The secret provider could
be an external Key Management System (KMS) such as AWS KMS, Google Cloud KMS, Hashicorp
Vault, or Kubernetes Secrets. This step is critical because encryption keys are essential to securely
encrypt and decrypt data on volumes. Although enabling encryption in the StorageClass via
parameters such as secure: "true" is necessary to activate encryption on volumes, it is insufficient
without a properly configured secret provider to manage the keys. The secret provider ensures keys
are securely stored, rotated, and accessed, fulfilling compliance and security requirements. Portworx
documentation stresses this as a foundational step to enable encrypted PVCs, highlighting that
without a configured secret provider, encrypted volumes cannot be provisioned or used
effectively.
Reference: Pure Storage Portworx Encryption Docs.
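Once a secret provider is configured, encrypted PVCs are requested through a StorageClass. A minimal sketch, assuming the CSI provisioner name and the documented secure parameter (verify both against your Portworx version):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-secure-sc
provisioner: pxd.portworx.com     # CSI provisioner; legacy in-tree name is kubernetes.io/portworx-volume
parameters:
  repl: "2"
  secure: "true"                  # volumes are encrypted; keys come from the configured secret provider
```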
What happens if the spec.csi.enabled flag is set to false in the Portworx StorageCluster spec?
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The spec.csi.enabled flag in the Portworx StorageCluster specification dictates whether the Container
Storage Interface (CSI) driver is deployed within the Kubernetes environment. Setting this flag to
false means that the CSI driver will not be installed or enabled, effectively disabling the CSI
functionality. The CSI driver is responsible for dynamic volume provisioning, attachment, and
lifecycle management in Kubernetes clusters. Disabling CSI might be necessary in environments
relying on legacy volume plugins or specific operational requirements. When CSI is disabled,
Portworx will not support dynamic provisioning or other CSI-dependent features, which could limit
functionality for Kubernetes storage operations. Portworx operator documentation explicitly states
that disabling CSI omits the CSI driver installation, advising users to carefully consider the impact
before setting this flag to false, especially in production environments requiring CSI
functionality.
Reference: Pure Storage Portworx Operator Docs.
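The flag lives directly in the StorageCluster spec; a minimal fragment illustrating the setting discussed above:

```yaml
spec:
  csi:
    enabled: false   # the CSI driver is not deployed; CSI-dependent features are unavailable
```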
When utilizing volume encryption, what is a supported external key manager?
B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Hashicorp Vault is a widely supported external Key Management System (KMS) integrated with
Portworx for volume encryption. It offers robust capabilities including secure key generation,
storage, rotation, and access control, making it well-suited for managing encryption keys in
enterprise environments. Integrating Portworx with Hashicorp Vault enables automated and secure
key retrieval during volume provisioning and use, ensuring compliance with security policies and
regulations. Unlike static keys stored in S3 buckets, which lack dynamic security controls, Hashicorp
Vault provides granular policy enforcement and audit logging. Microsoft Key Management Services
(KMS) is not currently supported as an external KMS for Portworx encryption. Portworx security
documentation emphasizes Hashicorp Vault’s importance in maintaining secure key lifecycle
management for encrypted volumes, highlighting it as the preferred KMS solution in multi-cloud and
hybrid environments.
Reference: Pure Storage Portworx Security Guide.
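A sketch of wiring Vault into the StorageCluster; the field names, env variable, and address are assumptions to check against the Portworx Vault integration docs for your version:

```yaml
spec:
  secretsProvider: vault             # verify field/value against your StorageCluster CRD
  env:
    - name: VAULT_ADDR               # hypothetical address
      value: "https://vault.example.com:8200"
    # Vault tokens or AppRole credentials are typically supplied via a
    # Kubernetes secret rather than inline -- see the Vault integration docs.
```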
What command allows a Portworx admin to create a cloud credential for the Object Store?
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Portworx, managing credentials for cloud object stores is vital to enable features like cloud
snapshots and backups. The command pxctl credentials create is used to create and register cloud
credentials with the Portworx cluster. This command allows administrators to specify provider details
such as AWS, Google Cloud, or Azure, and input necessary access keys, secret keys, regions, and
endpoints. Proper credential configuration enables Portworx to authenticate with external object
stores securely, ensuring reliable data movement and disaster recovery operations. The CLI facilitates
easy credential management, including listing, updating, and deleting credentials as needed. Official
Portworx documentation highlights pxctl credentials create as the authoritative command for
establishing cloud storage access, ensuring security best practices by managing credentials centrally
within the Portworx control plane.
Reference: Pure Storage Portworx CLI Guide.
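A sketch for an S3-compatible object store; the values are placeholders and the exact flag set should be verified with pxctl credentials create --help on your version:

```shell
pxctl credentials create \
  --provider s3 \
  --s3-access-key <ACCESS_KEY> \
  --s3-secret-key <SECRET_KEY> \
  --s3-region us-east-1 \
  --s3-endpoint s3.amazonaws.com \
  my-s3-cred
```

Credentials can then be inspected or removed with pxctl credentials list and pxctl credentials delete.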
Which command can be used to migrate volumes after cluster pairing is finished?
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Once two Portworx clusters are paired, for example in disaster recovery setups, data migration
between them can be initiated. The command pxctl cloudmigrate start triggers this migration
process. It synchronizes volumes, ensuring that data is copied securely and consistently from the
source cluster to the destination. This migration is transparent to applications and supports
incremental syncs, which helps reduce downtime. The CLI command also provides operational
feedback and logs for administrators to monitor progress. Portworx’s documentation on disaster
recovery workflows emphasizes this command as essential for starting volume migration post-cluster
pairing, streamlining data protection and business continuity strategies across multiple sites or cloud
regions.
Reference: Pure Storage Portworx Disaster Recovery Guide.
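A sketch of the migration kickoff; the flag names are assumptions to verify with pxctl cloudmigrate start --help, and the cluster pair ID comes from the pairing step:

```shell
# Start migrating volumes to the paired destination cluster
# (<cluster-pair-id> from `pxctl cluster pair list`; verify flags for your version)
pxctl cloudmigrate start --clusterid <cluster-pair-id>

# Progress can then be checked with:
pxctl cloudmigrate status
```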
What feature does a Portworx StorageClass provide to Kubernetes storage?
B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
In Kubernetes, StorageClasses define how persistent volumes are dynamically provisioned. A
Portworx StorageClass enables automated provisioning of Portworx volumes in response to
Persistent Volume Claim (PVC) requests. This eliminates the need for administrators to manually
create volumes, improving agility and scalability. The StorageClass encapsulates volume parameters
such as replication factor, encryption, and IO profiles, ensuring consistent storage policies across
deployments. While Portworx offers monitoring and backup capabilities, these are outside the scope
of the StorageClass resource itself. Kubernetes and Portworx documentation detail the StorageClass
as a critical abstraction for enabling self-service storage provisioning, allowing applications to
request storage with specific attributes dynamically and Portworx to satisfy these requests
seamlessly.
Reference: Pure Storage Portworx Kubernetes Guide.
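The dynamic-provisioning flow described above can be sketched as a StorageClass plus a PVC that triggers it; parameter values are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-repl3-sc
provisioner: pxd.portworx.com
parameters:
  repl: "3"                # three replicas of each volume
  io_profile: "db_remote"  # illustrative Portworx IO profile
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-data
spec:
  storageClassName: px-repl3-sc
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi        # Portworx provisions a matching volume automatically
```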
What are the two components of Stork?
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Stork (Storage Orchestrator for Kubernetes) is a Portworx utility designed to improve Kubernetes
storage orchestration. Its two main components are the Stork scheduler and the Stork extender. The
scheduler works by placing pods in Kubernetes clusters based on storage constraints, such as volume
affinity and anti-affinity, improving application resiliency and data locality. The extender integrates
with Kubernetes’ default scheduler, influencing pod scheduling decisions to respect storage policies
and optimize workload placement. Together, these components enable advanced features such as
application-aware migration, snapshot management, and backup coordination. Portworx
documentation explains that Stork’s design helps maintain stateful application availability during
scaling, upgrades, or disaster recovery scenarios by making Kubernetes scheduling storage-aware.
Reference: Pure Storage Portworx Stork Guide.
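Workloads opt into Stork's storage-aware placement by naming it as their scheduler. A minimal sketch, where the pod, image, and PVC names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pg-0
spec:
  schedulerName: stork           # let Stork place the pod near its Portworx volume
  containers:
    - name: postgres
      image: postgres:15
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pg-data       # hypothetical Portworx-backed PVC
```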
Which Portworx component is used to co-locate volumes with pods?
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Portworx’s Volume Placement Strategy ensures that persistent volumes are co-located with the pods
that use them, enhancing performance and reducing latency. This strategy involves applying
placement rules and constraints that guide Kubernetes scheduler and Portworx storage operations to
place data volumes on nodes close to or the same as the pods. Co-location improves I/O throughput
and application responsiveness by minimizing network hops between compute and storage
resources. While Autopilot automates scaling and Stork manages storage-aware scheduling, Volume
Placement Strategy specifically handles volume location relative to workloads. The Portworx
architecture documentation highlights this component as critical for optimizing storage efficiency
and workload performance in Kubernetes environments running Portworx storage.
Reference: Pure Storage Portworx Architecture Docs.
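A sketch of a VolumePlacementStrategy resource; the API version, label key, and values are assumptions to verify against the Portworx CRD documentation for your release:

```yaml
apiVersion: portworx.io/v1beta2
kind: VolumePlacementStrategy
metadata:
  name: webserver-strategy
spec:
  replicaAffinity:
    - matchExpressions:
        - key: px/node-type      # hypothetical node label
          operator: In
          values: ["fast-ssd"]
```

Per the Portworx docs, a StorageClass can then reference the strategy (e.g. via a placement_strategy parameter) so that volumes provisioned from it follow these placement rules.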