Container Application Software for Enterprises (CASE) is a specification that defines what?
A
Explanation:
Container Application Software for Enterprises (CASE) is a specification that defines the metadata and structure for packaging, managing, and unpacking containerized applications. CASE establishes a standard format so that containerized applications can be shared, distributed, and deployed consistently across different platforms and environments.
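As a rough illustration of the kind of metadata CASE defines, a CASE package is a versioned directory with a top-level descriptor; the layout and field names below are approximations, and the specification linked below is the authority on the exact schema:

# my-app-case/
#   case.yaml        <- top-level metadata descriptor
#   README.md
#   inventory/       <- installable items (operators, images, and so on)
#
# case.yaml (illustrative values, approximate field names):
name: my-app                          # hypothetical CASE name
version: 1.0.0                        # version of the CASE package itself
appVersion: 2.3.1                     # version of the packaged application
description: Example containerized application packaged as a CASE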
https://developer.ibm.com/blogs/container-application-software-for-enterprises-packaging-spec/
What is the benefit of using the Rsyslog Sidecar?
A
Explanation:
https://www.ibm.com/docs/en/cpfs?topic=operator-architecture-audit-logging-version-370
Rsyslog is an open-source software utility for forwarding log messages in an IP network. It can act as a centralized log server and can also run as a sidecar container in a Kubernetes environment. The Rsyslog sidecar eases the adoption of audit logging by shifting the burden of transmitting log messages from the application to the sidecar, which streamlines the implementation of audit logging within a Kubernetes environment.
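As a minimal sketch of this sidecar pattern (image names and paths are illustrative, not the Cloud Pak's actual configuration), the application writes audit records to a shared volume and the rsyslog container forwards them:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-rsyslog              # hypothetical pod name
spec:
  containers:
    - name: app                       # application writes audit logs to the shared volume
      image: example.com/my-app:1.0   # hypothetical image
      volumeMounts:
        - name: audit-logs
          mountPath: /var/log/audit
    - name: rsyslog-sidecar           # rsyslog reads the shared volume and forwards entries
      image: example.com/rsyslog:8    # hypothetical image
      volumeMounts:
        - name: audit-logs
          mountPath: /var/log/audit
          readOnly: true
  volumes:
    - name: audit-logs
      emptyDir: {}                    # shared scratch volume with the pod's lifetime

Because forwarding runs in its own container, the application needs no knowledge of the downstream log server.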
Reference:
https://rsyslog.com/
https://kubernetes.io/docs/concepts/cluster-administration/logging/
A business user wants to integrate events coming from BPMN workflows and from ADS. Which setup
would serve this purpose?
D
Explanation:
Kafka is a distributed streaming platform that can be used to integrate events coming from BPMN workflows and from ADS (Automation Decision Services). Emitting both event streams to Kafka allows data from these sources to be handled in a standardized, centralized way. The events can then be normalized and used for analytics, reporting, and real-time processing.
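For example, assuming a Strimzi-managed Kafka cluster on the same platform, a shared topic that both BPMN workflow events and ADS decision events are published to might be declared like this (topic and cluster names are illustrative):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: business-events               # hypothetical shared topic for both event sources
  labels:
    strimzi.io/cluster: my-kafka      # hypothetical Kafka cluster name
spec:
  partitions: 6                       # allows parallel consumers for analytics
  replicas: 3                         # replicated for fault tolerance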
Reference:
https://kafka.apache.org/
https://kafka.apache.org/documentation/streams/
https://kafka.apache.org/documentation/streams/core-concepts
What kind of data is written to the Business Automation Workflow transaction log file?
B
Explanation:
The Business Automation Workflow (BAW) transaction log file records the data that is written to the database as part of a BAW process. This can include information such as the process instance ID, the task that was executed, the user that performed the task, and the data that was written. The log file is useful for troubleshooting and auditing purposes.
Reference:
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.baw/t_baw_transaction_log.html
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_18.0.x/com.ibm.dba.baw/t_baw_transaction_log_file.html
Which item is best for troubleshooting FileNet Content Engine authentication issues?
C
Explanation:
The SystemOut.log file contains details about the security credentials used to access the FileNet Content Engine, and any authentication errors are recorded there. Other useful resources for troubleshooting authentication issues include the IBM Knowledge Center and IBM Support, as well as the IBM FileNet Content Engine forum.
Prior to deploying the Cloud Pak for Business Automation operator, which two common prerequisites
exist for all Cloud Pak for Business Automation capabilities (excluding Business Automation Insights)?
BC
Explanation:
Before deploying the Cloud Pak for Business Automation operator, a database must be prepared and persistent volumes must be available; these are the two prerequisites common to all capabilities (excluding Business Automation Insights), and they allow the operator to store and persist data as needed. The other options are not prerequisites for all Cloud Pak for Business Automation capabilities.
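For example, a pre-created persistent volume made available to the operator might look like the following sketch (capacity, backend, and names are illustrative, not Cloud Pak requirements):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cp4ba-shared-pv               # hypothetical volume name
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany                   # shared access across pods
  persistentVolumeReclaimPolicy: Retain   # keep the data if the claim is deleted
  nfs:                                # NFS backend assumed purely for illustration
    server: nfs.example.com
    path: /exports/cp4ba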
Reference:
[1] https://www.ibm.com/support/knowledgecenter/SSFTN5_9.7.1/com.ibm.wbpm.inst.doc/topics/t_k8s_prereq.html
[2] https://www.ibm.com/support/knowledgecenter/SSFTN5_9.7.1/com.ibm.wbpm.inst.doc/topics/t_k8s_install_operator.html
Which permission can be granted in order to see the RPA Server option in the Platform UI navigation menu?
B
Explanation:
The rpa-owner permission must be granted for the RPA Server option to appear in the Platform UI navigation menu. To grant the permission:
1) Log in to the Robotic Process Automation (RPA) server.
2) Navigate to the Settings tab.
3) Select the Security tab.
4) Select the Roles & Permissions tab.
5) Select the rpa-owner permission.
6) Click Save.
Which two roles have the permission to connect to an LDAP directory?
BC
Explanation:
The two roles that have the permission to connect to an LDAP directory are Cloud Pak Administrator
and Cluster Administrator.
Cloud Pak Administrator is a role that has the highest level of access and can perform all the
administrator tasks for the Cloud Pak.
Cluster Administrator is a role that has the permission to manage the resources of a Kubernetes
cluster such as connecting to an LDAP directory, configuring security settings, and managing users
and roles.
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_20.0.x/com.ibm.dba.baw.rpa/rpa_security_authorization.html
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_20.0.x/com.ibm.dba.baw.rpa/rpa_user_manage_users.html
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_20.0.x/com.ibm.dba.baw.rpa/rpa_security_permission.html
What kind of probe can be used to determine if an application running in a pod is healthy?
A
Explanation:
The most suitable probe in this case would be a liveness probe. This type of probe is used to detect if
an application is running correctly and is able to respond to requests. It is usually used in conjunction
with a readiness probe to ensure that the application is both healthy and ready to serve requests.
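As a minimal sketch, assuming an HTTP health endpoint (paths, port, and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: probed-app                    # hypothetical pod name
spec:
  containers:
    - name: app
      image: example.com/my-app:1.0   # hypothetical image
      livenessProbe:                  # kubelet restarts the container if this fails
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 10
      readinessProbe:                 # pod is removed from service endpoints until this passes
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5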
The migration of data from Cloud Pak for Business Automation versions that do not support an upgrade requires an Administrator to follow which process?
B
Explanation:
Migrating data from a Cloud Pak for Business Automation version that does not support an upgrade requires an Administrator to uninstall the current deployment and follow the migration instructions for each component, pointing the new deployment to the existing persistent stores. This involves migrating data from the existing databases and persistent volumes to the new deployment, and it may also require creating new configurations and customizing the new deployment to match the previous one.
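For example, when a component is reinstalled, its claim can be pointed back at a retained volume by name; this is a generic Kubernetes sketch (names are illustrative), and a previously released volume may also need its old claimRef cleared before it can rebind:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: baw-data-pvc                  # claim created for the new deployment
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: baw-data-pv             # bind explicitly to the pre-existing volume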
Reference:
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_20.0.x/com.ibm.dba.baw.install/topics/install_migrate_data.html
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_20.0.x/com.ibm.dba.baw.install/topics/install_migrate_data_prereq.html
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_20.0.x/com.ibm.dba.baw.install/topics/install_migrate_data_procedure.html
During a Cloud Pak for Business Automation installation, which component helps install, update, and
manage the lifecycle of all operators and services that are deployed in OpenShift Container Platform
clusters?
B
Explanation:
The Operator Lifecycle Manager (OLM) is the component that helps install, update, and manage the lifecycle of all operators and services that are deployed in OpenShift Container Platform clusters.
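For illustration, OLM installs an operator when a Subscription resource is created. The sketch below uses assumed package, channel, and catalog names; verify them against the IBM documentation:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-cp4a-operator             # assumed package name
  namespace: cp4ba                    # hypothetical target namespace
spec:
  channel: v21.3                      # assumed update channel
  name: ibm-cp4a-operator
  source: ibm-operator-catalog        # assumed CatalogSource providing the package
  sourceNamespace: openshift-marketplace

OLM then resolves the package from the catalog, installs the operator, and applies updates as new versions appear on the channel.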
Reference:
https://www.ibm.com/blogs/cloud-pak-for-automation/what-is-the-operator-lifecycle-manager-olm/
Where do the images reside for an air-gapped Cloud Pak for Business Automation upgrade?
C
Explanation:
When performing an air-gapped upgrade of Cloud Pak for Business Automation, the images used for
the upgrade reside in a local registry. An air-gapped environment is one in which there is no external
network access, so the images cannot be pulled from a remote registry such as the IBM Entitled Registry, the Red Hat Quay.io registry, or Docker Hub. Instead, the images must be mirrored ahead of time to a local registry
that is accessible to the OpenShift cluster.
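Once the images are mirrored, the cluster can be redirected to pull from the local registry instead of the original source. A sketch using an OpenShift ImageContentSourcePolicy (registry hostnames are illustrative):

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: cp4ba-mirror                  # hypothetical policy name
spec:
  repositoryDigestMirrors:
    - mirrors:
        - registry.internal.example.com/cp   # local air-gapped mirror
      source: cp.icr.io/cp                   # assumed original IBM image source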
Reference:
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_20.0.x/com.ibm.dba.baw.install/topics/install_airgap_prereq.html
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_20.0.x/com.ibm.dba.baw.install/topics/install_airgap_procedure.html
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_20.0.x/com.ibm.dba.baw.install/topics/install_airgap_registry.html
What is a best practice for application pod high availability?
D
Explanation:
A best practice for application pod high availability is to run multiple pod replicas spread across multiple worker nodes, as sketched below. This ensures redundancy and increased availability of the application: if one node fails, replicas on other nodes continue to serve traffic. Running several smaller pods rather than one large pod can also improve resource utilization and reduce overall costs.
Reference:
1. "Pod Autoscaler Best Practices" from the Kubernetes Docs: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-best-practices/
2. "High Availability for Containerized Workloads with Kubernetes" from AWS: https://aws.amazon.com/blogs/compute/high-availability-for-containerized-workloads-with-kubernetes/
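A minimal sketch of this practice, assuming a standard Kubernetes Deployment (all names are illustrative): several replicas with a pod anti-affinity rule so the scheduler spreads them across nodes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # hypothetical application
spec:
  replicas: 3                         # multiple pods for redundancy
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:              # prefer to place replicas on different nodes
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-app
                topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          image: example.com/my-app:1.0   # hypothetical image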
A starter deployment requires which two capabilities to be installed independently?
A
Explanation:
When deploying the Cloud Pak for Business Automation, a starter deployment requires two capabilities to be installed independently: Operational Decision Manager (ODM) and Automation Decision Services (ADS). These capabilities provide tools for creating and managing business rules, decision services, and analytics, and are typically used to automate decision-making processes within an organization.
https://www.ibm.com/support/knowledgecenter/en/SSYHZ8_20.0.x/com.ibm.dba.baw.install/topics/install_overview.html
How are the parameters set for accessing the images in an OpenShift Container Platform
environment?
C
Explanation:
The parameters for accessing the images in an OpenShift Container Platform environment are set using the oc set command. This command allows an administrator to configure various aspects of a deployment, such as environment variables, resource limits, image references, and image pull secrets. These parameters can be set for a specific deployment, service, pod, or other resource in the OpenShift cluster.
Reference:
[1] https://docs.openshift.com/container-platform/4.4/openshift_images/image-pull-secrets.html
[2] https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_cli/oc-set.html
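As an illustration, the image reference and pull secret that such commands manage end up in the workload spec. All names below are hypothetical, and the comment shows one plausible oc set invocation:

# The image can later be changed with, for example:
#   oc set image deployment/my-app app=registry.example.com/my-app:1.1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: my-registry-secret    # pull secret for the private registry
      containers:
        - name: app
          image: registry.example.com/my-app:1.0   # hypothetical image reference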