NVIDIA NCP-AIN Practice Test

AI Networking

Last exam update: Nov 18, 2025

Question 1

[InfiniBand Security]
You are concerned about potential security threats and unexpected downtime in your InfiniBand data
center.
Which UFM platform uses analytics to detect security threats, operational issues, and predict
network failures in InfiniBand data centers?

  • A. Host Agent
  • B. Enterprise Platform
  • C. Cyber-AI Platform
  • D. Telemetry Platform
Answer:

C


Explanation:
The NVIDIA UFM Cyber-AI Platform is specifically designed to enhance security and operational
efficiency in InfiniBand data centers. It leverages AI-powered analytics to detect security threats,
operational anomalies, and predict potential network failures. By analyzing real-time telemetry data,
it identifies abnormal behaviors and performance degradation, enabling proactive maintenance and
threat mitigation.
This platform integrates with existing UFM Enterprise and Telemetry services to provide a
comprehensive view of the network's health and security posture. It utilizes machine learning
algorithms to establish baselines for normal operations and detect deviations that may indicate
security breaches or hardware issues.
Reference: NVIDIA UFM Cyber-AI Documentation v2.9.1
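
As a hedged illustration of consuming such analytics programmatically, the following sketch polls the UFM REST API for active alarms. The endpoint path, credentials, and use of basic authentication are assumptions to verify against the UFM REST API documentation for your release:

# Hypothetical endpoint; check your UFM REST API reference
curl -k -u admin:<password> https://ufm-server/ufmRest/app/alarms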


Question 2

[AI Network Architecture]
A financial services company is planning to implement an AI infrastructure to support real-time fraud
detection and risk assessment. They need a solution that can handle both training and inference
workloads while maintaining data privacy and security.
Which NVIDIA reference architecture component would be most appropriate to address the data
privacy and security concerns in this AI networking setup?

  • A. NVIDIA CUDA-X AI libraries
  • B. NVIDIA Magnum IO
  • C. NVIDIA BlueField DPUs
  • D. NVIDIA Spectrum switches
Answer:

C


Explanation:
NVIDIA BlueField Data Processing Units (DPUs) are integral to securing AI infrastructures, especially
in environments requiring stringent data privacy and security measures. BlueField DPUs offload and
accelerate critical infrastructure tasks such as encryption, firewall enforcement, and intrusion
detection, thereby isolating sensitive data paths from potential threats.
In the context of AI workloads, BlueField DPUs enable secure and efficient data movement between
GPUs and storage systems, ensuring that sensitive information, like financial data, is protected
during both training and inference processes. Their integration into NVIDIA's reference architectures
provides a hardware root of trust, essential for maintaining data integrity and compliance with
security standards.
Reference: NVIDIA BlueField Networking Platform
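
As a brief, hedged sketch: BlueField adapters are typically placed in DPU (embedded CPU) mode so that infrastructure and security services run isolated on the DPU's Arm cores. The device path below is illustrative, and the mlxconfig parameter should be verified against the BlueField documentation for your adapter model:

# Query the current mode (device path is illustrative)
sudo mlxconfig -d /dev/mst/mt41686_pciconf0 q | grep INTERNAL_CPU_MODEL
# Set embedded CPU (DPU) mode; takes effect after a power cycle
sudo mlxconfig -d /dev/mst/mt41686_pciconf0 s INTERNAL_CPU_MODEL=1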



Question 4

[InfiniBand Optimization]
A high-performance InfiniBand fabric requires a routing engine that maximizes throughput and
network utilization while reducing congestion. Which option below is the best routing engine for
InfiniBand?

  • A. Adaptive Routing
  • B. Random Routing
  • C. Shortest Path Routing
  • D. Round Robin Routing
Answer:

A


Explanation:
Adaptive Routing in InfiniBand networks dynamically selects the optimal path for data packets based
on current network conditions, such as congestion levels and link utilization. This approach ensures
that traffic is evenly distributed across the network, preventing bottlenecks and maximizing overall
throughput.
By continuously monitoring the network and adjusting routes in real-time, Adaptive Routing
enhances performance and reliability, making it the preferred choice for high-performance
computing environments where consistent low latency and high bandwidth are critical.
Reference: NVIDIA InfiniBand Adaptive Routing Technology Whitepaper
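
In practice, the routing engine is selected in the subnet manager configuration. A minimal sketch, assuming an OpenSM build that ships the adaptive-routing engines (engine names vary by release, so verify against your opensm documentation):

# /etc/opensm/opensm.conf
# ar_ftree: adaptive routing for fat-tree topologies (assumed engine name)
routing_engine ar_ftree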


Question 5

[Spectrum-X Configuration]
You are deploying a Kubernetes cluster for AI workloads using NVIDIA Spectrum-X switches. You need
to automate the deployment and management of networking components in this environment.
Which NVIDIA tool is specifically designed to automate the deployment and management of
networking components in a Kubernetes cluster with Spectrum-X switches?

  • A. Mellanox OFED
  • B. Container Runtime
  • C. Network Operator
  • D. GPU Operator
Answer:

C


Explanation:
The NVIDIA Network Operator is designed to simplify and automate the deployment and
management of networking components in Kubernetes clusters, particularly those utilizing NVIDIA
Spectrum-X switches. It manages the installation and configuration of necessary drivers, plugins, and
other networking resources to enable features like RDMA and GPUDirect RDMA, which are essential
for high-performance AI workloads.
By leveraging Kubernetes Custom Resource Definitions (CRDs) and the Operator Framework, the
Network Operator ensures that networking components are consistently and correctly configured
across the cluster, reducing manual intervention and potential configuration errors.
Reference: NVIDIA Network Operator Documentation
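
For orientation, a hedged deployment sketch using Helm; the repository URL and chart name follow NVIDIA's published conventions but should be confirmed for your release:

helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install network-operator nvidia/network-operator \
    -n nvidia-network-operator --create-namespace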


Question 6

[Spectrum-X Optimization]
You are investigating a performance issue in a Spectrum-X network and suspect there might be
congestion problems.
Which component executes the congestion control algorithm in a Spectrum-X environment?

  • A. BlueField-3 SuperNICs
  • B. NVIDIA DOCA software
  • C. NVIDIA NetQ
  • D. Spectrum-4 switches
Answer:

A


Explanation:
In the Spectrum-X architecture, BlueField-3 SuperNICs are responsible for executing the congestion
control algorithm. They handle millions of congestion control events per second with microsecond
reaction latency, applying fine-grained rate decisions to manage data flow effectively. This ensures
optimal network performance by preventing congestion and packet loss.
Reference: NVIDIA Spectrum-X Networking Platform


Question 7

[InfiniBand Optimization]
Which of the following routing protocols is not capable of avoiding credit loops?

  • A. UPDOWN
  • B. All routing protocols are capable of avoiding credit loops
  • C. MINHOP
  • D. FAT TREE
Answer:

C


Explanation:
The MINHOP routing protocol, while efficient in finding minimal paths, does not inherently prevent
credit loops. This can lead to deadlocks in the network. In contrast, routing protocols like UPDOWN
and FAT TREE are designed to avoid such loops, ensuring more reliable network operation.
Reference: Optimized Routing for Large-Scale InfiniBand Networks
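
The routing engine is chosen in opensm.conf, which accepts an ordered list and falls back to the next engine if one cannot run on the given topology. A minimal sketch, assuming standard OpenSM engine names:

# /etc/opensm/opensm.conf
# Try fat-tree routing first; fall back to the deadlock-free up/down engine
routing_engine ftree,updn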


Question 8

[Spectrum-X Configuration]
Which of the following commands would you use to assign the IP address 20.11.12.13 to the
management interface in SONiC?

  • A. nv set interface mgmt ip 20.11.12.13 20.11.12.254
  • B. interface mgmt0 vrf mgmt ip address 20.11.12.13 20.11.12.254
  • C. sudo config interface ip add eth0 20.11.12.13/24 20.11.12.254
  • D. config ip add etho 20.11.12.13/24 20.11.12.254
Answer:

C


Explanation:
In SONiC, to assign a static IP address to the management interface, the correct command is:
sudo config interface ip add eth0 20.11.12.13/24 20.11.12.254
This command sets the IP address and the default gateway for the management interface.
SONiC (Software for Open Networking in the Cloud) is an open-source network operating system used on NVIDIA Spectrum-X platforms, including Spectrum-4 switches, to provide a flexible and scalable networking solution for AI and HPC data centers. Its default out-of-band management interface is eth0, and configuring it is a prerequisite for remote access and network management.
Per NVIDIA's SONiC documentation, management addressing is set with SONiC's config command-line utility. The command specifies the interface (eth0), the IP address with its subnet mask (20.11.12.13/24), and the default gateway (20.11.12.254), ensuring proper network connectivity.
Exact Extract from NVIDIA Documentation:
“To configure the management interface in SONiC, use the config interface ip add command. For
example, to assign an IP address to the eth0 management interface, run:
sudo config interface ip add eth0 <IP_ADDRESS>/<PREFIX_LENGTH> <GATEWAY>
Example:
sudo config interface ip add eth0 20.11.12.13/24 20.11.12.254
This command adds the specified IP address and gateway to the management interface, enabling
network access.”
— NVIDIA SONiC Configuration Guide
This extract confirms that option C is the correct command for assigning the IP address to the management interface in SONiC. The use of sudo ensures the command is executed with the necessary administrative privileges, and the syntax aligns with SONiC's configuration model, which applies the change to the running configuration database (run config save to persist it across reboots).
Reference: NVIDIA SONiC Configuration Guide
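
As a hedged follow-up, the setting can be verified and persisted with standard SONiC CLI commands (confirm exact syntax on your SONiC build):

show ip interfaces      # verify eth0 lists 20.11.12.13/24
sudo config save -y     # persist the change to /etc/sonic/config_db.json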


Question 9

[AI Network Architecture]
You are optimizing an AI workload that involves multiple GPUs across different nodes in a data
center. The application requires both high-bandwidth GPU-to-GPU communication within nodes and
efficient communication between nodes.
Which combination of NVIDIA technologies would best support this multi-node, multi-GPU AI
workload?

  • A. NVLink for both intra-node and inter-node GPU communication.
  • B. InfiniBand for both intra-node and inter-node GPU communication.
  • C. NVLink for intra-node GPU communication and InfiniBand for inter-node communication.
  • D. PCIe for intra-node GPU communication and RoCE for inter-node communication.
Answer:

C


Explanation:
For optimal performance in multi-node, multi-GPU AI workloads:
NVLink provides high-speed, low-latency communication between GPUs within the same node.
InfiniBand offers efficient, scalable communication between nodes in a data center.
Combining these technologies ensures both intra-node and inter-node communication needs are
effectively met.
Reference: NVIDIA NVLink & NVSwitch: Fastest HPC Data Center Platform
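
A hedged way to see this division of labor in practice is to run the NCCL performance benchmarks (nccl-tests): NCCL automatically uses NVLink between GPUs in the same node and InfiniBand between nodes. Host names and flag values below are illustrative:

# 2 nodes x 8 GPUs: all-reduce bandwidth sweep from 8 B to 4 GB
mpirun -np 16 -H node1:8,node2:8 ./build/all_reduce_perf -b 8 -e 4G -f 2 -g 1
# Optionally pin NCCL to specific InfiniBand HCAs:
#   export NCCL_IB_HCA=mlx5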


Question 10

[InfiniBand Configuration]
When designing a multi-tenancy East/West (E/W) fabric using Unified Fabric Manager (UFM), which
method should be used?

  • A. Partition / PKey
  • B. VLAN
  • C. ROMA
  • D. VXLAN
Answer:

A


Explanation:
In InfiniBand networks, Partitioning using Partition Keys (PKeys) is the standard method for
implementing multi-tenancy and traffic isolation. PKeys allow administrators to define logical
partitions within the fabric, ensuring that traffic is confined to designated groups of nodes. This
mechanism is essential for creating secure and isolated environments in multi-tenant architectures.
The Unified Fabric Manager (UFM) leverages PKeys to manage these partitions effectively, enabling
administrators to assign and control access rights across different tenants. This approach ensures
that each tenant's traffic remains isolated, maintaining both security and performance integrity
within the shared fabric.
Reference: NVIDIA UFM Enterprise User Manual v6.15.6-4
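
Under the hood, the partitions that UFM manages correspond to subnet-manager PKey definitions. A minimal sketch of an OpenSM partitions.conf with two isolated tenants; the PKey values and port GUIDs are placeholders:

# /etc/opensm/partitions.conf
Default=0x7fff, ipoib, defmember=full : ALL;
tenantA=0x0002, ipoib : 0x0002c9030001a001=full, 0x0002c9030001a002=full;
tenantB=0x0003, ipoib : 0x0002c9030001b001=full, 0x0002c9030001b002=full;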


Question 11

[InfiniBand Configuration]
Why is the InfiniBand LRH called a local header?

  • A. It is used for routing traffic between nodes in the local subnet.
  • B. It provides the LIDs from the local subnet manager.
  • C. It allows traffic on a local link only.
  • D. It provides the parameters for each local HCA.
Answer:

A


Explanation:
The Local Route Header (LRH) in InfiniBand is termed "local" because it is used exclusively for routing
packets within a single subnet. The LRH contains the destination and source Local Identifiers (LIDs),
which are unique within a subnet, facilitating efficient routing without the need for global
addressing. This design optimizes performance and simplifies routing within localized network
segments.
InfiniBand is a high-performance, low-latency interconnect technology widely used in AI and HPC
data centers, supported by NVIDIA’s Quantum InfiniBand switches and adapters. The Local Routing
Header (LRH) is a critical component of the InfiniBand packet structure, used to facilitate routing
within an InfiniBand fabric. The question asks why the LRH is called a “local header,” which relates to
its role in the InfiniBand network architecture.
According to NVIDIA’s official InfiniBand documentation, the LRH is termed “local” because it
contains the addressing information necessary for routing packets between nodes within the same
InfiniBand subnet. The LRH includes fields such as the Source Local Identifier (SLID) and Destination
Local Identifier (DLID), which are assigned by the subnet manager to identify the source and
destination endpoints within the local subnet. These identifiers enable switches to forward packets
efficiently within the subnet without requiring global routing information, distinguishing the LRH
from the Global Routing Header (GRH), which is used for inter-subnet routing.
Exact Extract from NVIDIA Documentation:
“The Local Routing Header (LRH) is used for routing InfiniBand packets within a single subnet. It
contains the Source LID (SLID) and Destination LID (DLID), which are assigned by the subnet manager
to identify the source and destination nodes in the local subnet. The LRH is called a ‘local header’
because it facilitates intra-subnet routing, enabling switches to forward packets based on LID-based
forwarding tables.”
— NVIDIA InfiniBand Architecture Guide
This extract confirms that option A is the correct answer, as the LRH’s primary function is to route
traffic between nodes within the local subnet, leveraging LID-based addressing. The term “local”
reflects its scope, which is limited to a single InfiniBand subnet managed by a subnet manager.
Reference: LRH and GRH InfiniBand Headers - NVIDIA Enterprise Support Portal
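
The LIDs that the LRH carries can be inspected from any host with the InfiniBand utilities installed; a hedged example (arguments are placeholders):

ibstat mlx5_0              # shows the port's Base LID and the SM LID
ibtracert <SLID> <DLID>    # follows the LID-routed path through the subnet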


Question 12

[AI Network Architecture]
You are designing a new AI data center for a research institution that requires high-performance
computing for large-scale deep learning models. The institution wants to leverage NVIDIA's reference
architectures for optimal performance.
Which NVIDIA reference architecture would be most suitable for this high-performance AI research
environment?

  • A. NVIDIA Base Command Platform
  • B. NVIDIA DGX Cloud
  • C. NVIDIA LaunchPad
  • D. NVIDIA DGX SuperPOD
Answer:

D


Explanation:
The NVIDIA DGX SuperPOD is a turnkey AI supercomputing infrastructure designed for large-scale
deep learning and high-performance computing workloads. It integrates multiple DGX systems with
high-speed networking and storage solutions, providing a scalable and efficient platform for AI
research institutions. The architecture supports rapid deployment and is optimized for training
complex models, making it the ideal choice for environments demanding top-tier AI performance.
Reference: DGX SuperPOD Architecture - NVIDIA Docs


Question 13

[InfiniBand Security]
In a multi-tenant InfiniBand environment managed by UFM, you need to configure access controls to
prevent unauthorized users from altering the fabric configuration. Which method is used within UFM
to manage user access and ensure authorized modifications only?

  • A. Digital Certification Management (DCM)
  • B. Network Access Control (NAC)
  • C. Virtual Network Segmentation (VNS)
  • D. Role-Based Access Control (RBAC)
Answer:

D


Explanation:
Role-Based Access Control (RBAC) is implemented within NVIDIA's Unified Fabric Manager (UFM) to
manage user permissions effectively. RBAC allows administrators to assign roles to users, each with
specific permissions, ensuring that only authorized individuals can make changes to the fabric
configuration. This structured approach to access control enhances security by limiting the potential
for unauthorized modifications and streamlines the management of user privileges across the
network.
Reference: Role-Based Access Control (RBAC) - One Identity


Question 14

[InfiniBand Troubleshooting]
As the network administrator for a large-scale AI research cluster, you are responsible for ensuring
seamless data flow across an InfiniBand east-west fabric that interconnects hundreds of compute
nodes.
Which tool would you use to trace and discover the network paths between nodes on this InfiniBand
east-west fabric?

  • A. NetQ
  • B. ibpathverify
  • C. ibnetdiscover
  • D. tracert
Answer:

C


Explanation:
The ibnetdiscover utility is used to perform InfiniBand subnet discovery and outputs a human-
readable topology file. GUIDs, node types, and port numbers are displayed, as well as port LIDs and
node descriptions. All nodes and links are displayed, providing a full topology. This utility can also be
used to list the current connected nodes. The output is printed to the standard output unless a
topology file is specified.
InfiniBand is a high-performance, low-latency interconnect technology used in AI and HPC data
centers, particularly for east-west traffic between compute nodes in large-scale fabrics. Ensuring
seamless data flow requires tools to troubleshoot and monitor the network, including the ability to
trace and discover network paths between nodes. The question asks for the specific tool used to
trace and discover paths in an InfiniBand fabric, which is a key task in InfiniBand troubleshooting.
According to NVIDIA’s official InfiniBand documentation, the ibnetdiscover tool is designed to
discover and map the topology of an InfiniBand fabric, including the paths between nodes. It scans
the fabric, queries the subnet manager, and generates a topology map that details the connections
between switches, Host Channel Adapters (HCAs), and other devices. This tool is essential for
verifying connectivity, identifying routing paths, and troubleshooting issues like misconfigured routes
or link failures in large-scale InfiniBand fabrics.
Exact Extract from NVIDIA Documentation:
“The ibnetdiscover tool is used to discover the InfiniBand fabric topology and generate a map of the
network. It queries the subnet manager to retrieve information about all nodes, switches, and links
in the fabric, providing a detailed view of the paths between nodes. This tool is critical for
troubleshooting connectivity issues and ensuring proper routing in InfiniBand networks.”
— NVIDIA InfiniBand Networking Guide
This extract confirms that ibnetdiscover is the correct tool for discovering network paths in an
InfiniBand east-west fabric. It provides a comprehensive view of the fabric’s topology, enabling
administrators to trace paths between compute nodes and ensure seamless data flow.
Reference: InfiniBand Fabric Utilities - NVIDIA Docs
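
A hedged usage sketch (run from a node with the InfiniBand diagnostic utilities installed; option letters can vary by release):

sudo ibnetdiscover                # print the full fabric topology to stdout
sudo ibnetdiscover -l             # list the connected nodes only
sudo ibnetdiscover > fabric.topo  # save a topology file for later comparison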


Question 15

[Spectrum-X Configuration]
When upgrading Cumulus Linux to a new version, which configuration files should be migrated from
the old installation?
Pick the 2 correct responses below.

  • A. All files in /etc/cumulus/acl
  • B. All files in /etc/network
  • C. All files in /etc
  • D. All files in /etc/mix
Answer:

A, B


Explanation:
Before upgrading Cumulus Linux, it's essential to back up configuration files to a different server. The
/etc directory is the primary location for all configuration data in Cumulus Linux. Specifically, the
following files and directories should be backed up:
/etc/frr/ - Routing application (responsible for BGP and OSPF)
/etc/hostname - Configuration file for the hostname of the switch
/etc/network/ - Network configuration files, most notably /etc/network/interfaces and
/etc/network/interfaces.d/
/etc/cumulus/acl - Access control list configurations
Cumulus Linux is a network operating system used on NVIDIA Spectrum switches, including those in
the Spectrum-X platform, to provide a Linux-based environment for Ethernet networking in AI and
HPC data centers. When upgrading Cumulus Linux to a new version, it’s critical to migrate specific
configuration files to preserve network settings and ensure continuity. The question asks for the two
configuration file locations that should be migrated from the old installation during an upgrade.
According to NVIDIA’s official Cumulus Linux documentation, the key directories containing
configuration files that should be migrated during an upgrade are /etc/cumulus/acl (for access
control list configurations) and /etc/network (for network interface configurations). These directories
store critical network settings that define the switch’s behavior, such as ACL rules and interface
settings, which must be preserved to maintain network functionality after the upgrade.
Exact Extract from NVIDIA Documentation:
“When upgrading Cumulus Linux, you must back up and migrate specific configuration files to ensure
continuity of network settings. The following directories should be included in the backup:
/etc/cumulus/acl: Contains access control list (ACL) configuration files that define packet filtering and
security policies.
/etc/network: Contains network interface configuration files, such as interfaces and ifupdown2
settings, which define the network interfaces and their properties.
Back up these directories before upgrading and restore them after the new version is installed to
maintain consistent network behavior.”
— NVIDIA Cumulus Linux Upgrade Guide
This extract confirms that options A and B are the correct answers, as /etc/cumulus/acl and
/etc/network contain essential configuration files that must be migrated during a Cumulus Linux
upgrade. These files ensure that ACL policies and network interface settings are preserved, which are
critical for Spectrum-X configurations in AI networking environments.
Reference: Upgrading Cumulus Linux - NVIDIA Docs
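
A hedged pre-upgrade sketch that archives the two directories and copies them to another server, as the documentation recommends (host and paths are placeholders):

sudo tar -czf /tmp/cl-config-backup.tgz /etc/cumulus/acl /etc/network
scp /tmp/cl-config-backup.tgz admin@backup-server:/backups/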
