Pegasystems pegacplsa23v1 practice test

Certified Pega Lead System Architect (LSA) Exam 23

Last exam update: Nov 18, 2025
Page 1 out of 4. Viewing questions 1-15 out of 60

Question 1

[Performance Optimization]
Data pages offer a "Do not reload when" option for optimizing data retrieval. Which two of the
following scenarios make appropriate use of this feature? (Choose Two)

  • A. A data page tracks real-time seat availability for a theater. Because seat bookings stop once the show starts, set it to not reload after the show begins until it ends.
  • B. A data page maintains reservation details for a hotel. To optimize performance, set it to not reload after check-in time ends, because no new reservations are taken overnight unless you have approval from the manager of the hotel.
  • C. A data page pulls live traffic updates for a delivery service's routing system. Because traffic patterns stabilize late at night, set it to not reload from late evening until early morning.
  • D. A data page holds the daily menu for a cafeteria that doesn't change the menu once the kitchen opens. To avoid unnecessary updates, set it to not reload after the kitchen begins its operations until the next day.
Answer: A, D

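Choices A and D describe data that stops changing after a business event (the show starting, the kitchen opening), which is exactly when skipping reloads is safe. As a rough, non-Pega illustration of the "Do not reload when" idea, here is a minimal Java sketch with hypothetical class and method names (the real feature is configured declaratively on the data page rule):

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

/** Caches a value and skips refreshing it while a "do not reload" condition holds. */
public class ConditionalCache<T> {

    private final Supplier<T> loader;            // fetches fresh data, e.g. seat availability
    private final Predicate<T> doNotReloadWhen;  // true once the data can no longer change
    private T cached;

    public ConditionalCache(Supplier<T> loader, Predicate<T> doNotReloadWhen) {
        this.loader = loader;
        this.doNotReloadWhen = doNotReloadWhen;
    }

    public synchronized T get() {
        // Reload only when nothing is cached yet or the "freeze" condition is not met.
        if (cached == null || !doNotReloadWhen.test(cached)) {
            cached = loader.get();
        }
        return cached;
    }
}
```

For the theater scenario, the predicate would hold from show start until show end, so reads during the performance serve the cached seat map instead of querying the source again.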

Question 2

[Data Modeling]
In a flight schedule management application, the initiation of a scheduled flight case triggers both
pre-flight check and flight catering service processes. These processes require access to flight
information, such as flight number, cabin class, number of seats, departure date and time, while also
maintaining process-specific data elements. Which one of the following options best describes the
optimal data model for meeting this requirement?

  • A. Set Schedule Flight as the parent case type, with the triggered processes as child case types. Store flight data within the Schedule Flight case. This data will then be propagated to the child cases upon their creation.
  • B. Set Schedule Flight as the parent case type, with the triggered processes as child case types. Place flight data in the travel management enterprise layer and create data classes specific to each child case type, inheriting directly from the travel management enterprise layer.
  • C. Set Schedule Flight as the parent case type, with the triggered processes as child case types. Place flight data within the Schedule Flight class and develop data classes specific to each child case type.
  • D. Set Schedule Flight as the parent case type, with the triggered processes as child case types. Place flight data within the work pool class, allowing all three case types to inherit properties from the work pool class.
Answer: A


Question 3

[Data Modeling]
You are a Pega developer working on an insurance application. The application needs to manage
different types of insurance policies such as car insurance, home insurance, and life insurance. Each
type of insurance policy has some common attributes (policy number, policyholder name, and
premium amount), but also has some unique attributes (such as vehicle details for car insurance,
property details for home insurance, and beneficiary details for life insurance). Which one of the
following approaches to handling this scenario would be most appropriate in a Pega application?

  • A. Create a single class for all types of insurance policies and dynamically add or remove attributes as needed.
  • B. Create a single class for all types of insurance policies and define all possible attributes.
  • C. Create a base class for the insurance policy for the common attributes, and then create derived classes for each type of insurance policy with their unique attributes.
  • D. Create a separate class for each type of insurance policy and then define the common attributes in the new class.
Answer: C

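Option C is the classic specialization pattern: shared attributes sit in a base class and each policy type adds only what is unique to it. A brief Java sketch of the structure (hypothetical class and field names; in Pega the same shape is achieved with a base class whose properties the derived classes inherit):

```java
/** Common attributes shared by every policy type. */
public abstract class InsurancePolicy {
    protected String policyNumber;
    protected String policyholderName;
    protected double premiumAmount;
}

/** Each policy type adds only its unique attributes. */
class CarInsurance extends InsurancePolicy {
    private String vehicleDetails;
}

class HomeInsurance extends InsurancePolicy {
    private String propertyDetails;
}

class LifeInsurance extends InsurancePolicy {
    private String beneficiaryDetails;
}
```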

Question 4

[Data Modeling]
In a hospital's patient management Pega application, patient details are gathered during the initial
consultation process. This information must be accessible and current for all subsequent
appointments and treatments. Keeping patient information updated is crucial to effective planning
and implementation of treatment. Which one of the following options would you select as a
solution?

  • A. A portal to collect patient data. Pre-load the patient's information into the system for each subsequent appointment and treatment, based on the outcomes of the initial consultation.
  • B. A portal for updating patient data, using the snapshot data access pattern to access patient information for appointments and treatment processes.
  • C. A portal to collect patient data and store the data with Consultation cases. Use data propagation features to transfer patient information to each subsequent appointment or treatment as they are scheduled.
  • D. A portal for updating patient data, utilizing the System of Record (SOR) data access pattern to access patient information for appointments and treatment processes.
Answer: C


Question 5

[Data Modeling]
As an LSA developing a Pega application for an online grocery store, you are tasked with enabling
customers to navigate through various categories such as "Dairy," "Confectionery," "Frozen Food,"
and "Soft Drinks." Each category contains at least 10 sub-categories, with the workflow varying
depending on the selected sub-category. What is the best method of populating the categories and
sub-categories and retrieving the related information from the grocery store's database?

  • A. Implement a data page that accepts either a Category or Sub-category as a parameter. Based on the parameter type, the required information is retrieved and displayed in the subsequent layouts.
  • B. Implement a data page for Sub-categories. Load Categories by default upon startup. Populate Sub-categories after a Category is selected, using a Sub-category data page that takes the Category as a parameter.
  • C. Implement data pages for Categories and Sub-categories. Populate Sub-categories after a Category is selected, using a Sub-category data page that takes the Category as a parameter.
  • D. Implement a data page that takes the Sub-category as a parameter. Based on the Sub-category type, the necessary information is retrieved and shown in the subsequent layouts.
Answer: C

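Option C amounts to two lookups, with the sub-category lookup parameterized by the selected category and cached per parameter value. A simplified Java sketch of that behavior (hypothetical names; in the application this would be a parameterized data page sourced from the store's database):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Loads sub-categories on demand and caches one result list per category parameter. */
public class SubCategoryLookup {

    private final Function<String, List<String>> databaseQuery;  // e.g. SELECT ... WHERE category = ?
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    public SubCategoryLookup(Function<String, List<String>> databaseQuery) {
        this.databaseQuery = databaseQuery;
    }

    /** Analogous to reading a parameterized data page: one cached copy per parameter value. */
    public List<String> forCategory(String category) {
        return cache.computeIfAbsent(category, databaseQuery);
    }
}
```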

Question 6

[Data Modeling]
Dynamic class references (DCR) are one approach to creating a data model in a Pega implementation.
Which one of the following best describes the DCR approach?

  • A. Associating objects with classes at runtime based on user input or certain conditions, allowing for flexibility in the structure of the data model.
  • B. Linking different classes together in a data model to establish relationships and associations between various entities.
  • C. Dynamically adjusting the attributes of a class without modifying its structure, facilitating agile data management.
  • D. Referencing classes that are predefined and unchangeable, ensuring consistency and reliability in the data model.
Answer: A

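Answer A captures the essence of DCR: the concrete class an object uses is decided at runtime rather than hard-coded at design time. A generic Java sketch of the idea with hypothetical types (Pega achieves this declaratively, without code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

/** Picks the concrete implementation class at runtime from a key such as user input. */
public class DynamicClassRegistry {

    private final Map<String, Supplier<PaymentMethod>> byKey = new HashMap<>();

    public DynamicClassRegistry() {
        byKey.put("CARD", CardPayment::new);   // registered alternatives
        byKey.put("WIRE", WirePayment::new);
    }

    public PaymentMethod create(String runtimeKey) {
        Supplier<PaymentMethod> factory = byKey.get(runtimeKey);
        if (factory == null) {
            throw new IllegalArgumentException("No class registered for " + runtimeKey);
        }
        return factory.get();   // the class is chosen only when the input is known
    }
}

interface PaymentMethod { }
class CardPayment implements PaymentMethod { }
class WirePayment implements PaymentMethod { }
```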

Question 7

[Work Delegation and Asynchronous Processing]
How can you configure a flow to resume from the point of failure after an error is resolved?

  • A. By using the Resume flow option in the Actions menu.
  • B. By using the Ticket shape and setting the ticket name to pyRestartFlow.
  • C. By using the Subprocess shape and calling the pyRestartFlow flow.
  • D. By using the Assignment shape and setting the assignment type to Resume.
Answer: B


Question 8

[Work Delegation and Asynchronous Processing]
Consider a scenario where an e-commerce company is using a Job Scheduler to manage various
tasks. The Job Scheduler is responsible for updating inventory, processing orders, sending order
confirmation emails, and generating daily sales reports. Which two of the following are typical
features of a Job Scheduler? (Choose Two)

  • A. It can automatically retry failed tasks.
  • B. It can run tasks in a specific order.
  • C. It can automatically commit the data updates.
  • D. It can run tasks in parallel.
Answer: A, D

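Options A and D name two standard scheduler capabilities: retrying a failed task and running independent tasks in parallel. A compact Java sketch of both, using the JDK's ScheduledExecutorService with hypothetical task names (not the Pega Job Scheduler API):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NightlyJobs {

    public static void main(String[] args) {
        // A pool larger than one lets independent jobs run in parallel.
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);

        scheduler.scheduleAtFixedRate(withRetry(NightlyJobs::updateInventory, 3), 0, 24, TimeUnit.HOURS);
        scheduler.scheduleAtFixedRate(withRetry(NightlyJobs::generateSalesReport, 3), 0, 24, TimeUnit.HOURS);
    }

    /** Wraps a task so a failed run is retried up to maxAttempts times. */
    static Runnable withRetry(Runnable task, int maxAttempts) {
        return () -> {
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    task.run();
                    return;
                } catch (RuntimeException e) {
                    if (attempt == maxAttempts) {
                        System.err.println("Giving up after " + maxAttempts + " attempts: " + e);
                    }
                }
            }
        };
    }

    static void updateInventory()     { /* refresh stock levels (omitted) */ }
    static void generateSalesReport() { /* build the daily report (omitted) */ }
}
```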

Question 9

[Work Delegation and Asynchronous Processing]
The ABC trip planner company offers a variety of services, for example, hotel bookings, flight
bookings, and train bookings, all on a single platform. All the services offered have their invoice
processing managed by a dedicated team of accounting specialists from the XYZ financial accounting
organization. Every booking made by a customer creates a "Booking" case, which manages the
booking of the required service. An "Invoice" case is also created for the verification and validation of
all payments to the various stakeholders involved. For security reasons, the Invoice case contains
limited information and cannot be a child case of Booking. Both are siblings that update each other.
Which one of the following is the best possible solution to implement this requirement?

  • A. Have a data object that connects both the sibling cases. Use a Job Scheduler that runs every one minute to query records from the data object and update the sibling case as required. Use system context to gain access to the sibling case.
  • B. Have a data object that connects both the sibling cases. Use a Job Scheduler that runs every one minute to query records from the data object and update the sibling case as required.
  • C. Have a data object that connects both the sibling cases. Use a queue processor (Dedicated/Standard) to process the record for status updates. Update the security context while queuing with the appropriate access group to gain access to the sibling case.
  • D. Have a data object that connects both the sibling cases. Use a queue processor (Dedicated/Standard) to process the record for status updates. Use the current operator access group context while queuing to update the case.
Answer: C


Question 10

[Work Delegation and Asynchronous Processing]
What is the main difference between a Data Flow and a Queue Processor?

  • A. Queue Processors can process a single item immediately, while Data Flows cannot.
  • B. Data Flows can process data asynchronously, while Queue Processors cannot.
  • C. Data Flows can be scheduled to run at specific times, while Queue Processors cannot.
  • D. Queue Processors can process large volumes of data, while Data Flows cannot.
Answer: A

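The distinction in answer A is that a queued item can be handed to a background worker and processed immediately, one item at a time, while a data flow is geared toward running a whole batch or stream of records. A minimal Java sketch of immediate per-item background processing (a generic in-memory queue, not Pega's Queue-For-Processing mechanism):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SimpleQueueProcessor {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public SimpleQueueProcessor() {
        // Background worker picks each item up as soon as it is queued.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String item = queue.take();   // blocks until an item arrives
                    process(item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public void enqueue(String item) {
        queue.add(item);
    }

    private void process(String item) {
        System.out.println("Processed immediately: " + item);
    }
}
```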

Question 11

[Integration]
U+ Bank has a customer service application that processes customer complaints. Now, after three
years in production, the operations manager needs historical reports on resolved cases. The reports
should be sent in near real-time. The data warehouse has exposed a REST API to receive the data,
and the reports are then generated from the data warehouse. Which two of the following options
could you use to create an ideal design solution for posting the data to the data warehouse? (Choose
Two)

  • A. Read data with data flows, which source data by using a dataset and then output the data to a utility that synchronously posts the data to the data warehouse. For in-flight cases, on resolution of the case, configure the system to post the data to the data warehouse over REST.
  • B. Prepare an extract rule and extract the data of already-resolved cases, and then load it into the data warehouse for reporting. For in-flight cases, on resolution of a case, configure the system to post the data to the data warehouse over REST.
  • C. Read data with data flows, which source data by using a dataset and then output the data to a utility that posts the data to the queue processor, which then posts the data to the data warehouse over REST. For in-flight cases, on resolution of a case, reuse a queue processor that you created.
  • D. Run a one-time utility that browses all the resolved-cases data, and then asynchronously posts the data to the data warehouse. For in-flight cases, on resolution of a case, configure the system to synchronously post the data to the data warehouse over REST.
Answer: B, C

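Both chosen options finish the same way: when an in-flight case resolves, its data reaches the warehouse's REST endpoint (directly in option B, via a queue processor in option C). A hedged Java sketch of that posting step using the JDK HttpClient; the endpoint URL and payload shape are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WarehousePublisher {

    private static final String ENDPOINT = "https://warehouse.example.com/api/resolved-cases"; // placeholder

    public static void postResolvedCase(String caseJson) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(caseJson))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() >= 300) {
            // In the queue-processor design (option C), a failed post would simply be retried.
            throw new IllegalStateException("Warehouse rejected case data: " + response.statusCode());
        }
    }
}
```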

Question 12

[Work Delegation and Asynchronous Processing]
Which two statements are true about asynchronous processing? (Choose Two)

  • A. Job schedulers can be associated with more than one node type.
  • B. Queue processors can be associated with more than one node type.
  • C. Job schedulers use a Kafka-distributed streaming service to achieve maximum throughput.
  • D. The replication factor of the queue processor ensures reliability and fault tolerance.
Answer: B, D

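Statement D refers to the replication factor of the underlying stream: each partition is copied to multiple brokers so processing survives the loss of a node. The sketch below uses the standard Apache Kafka AdminClient purely for illustration (broker address and topic name are placeholders; Pega provisions and manages its stream topics itself):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions, each replicated to 3 brokers for fault tolerance.
            NewTopic topic = new NewTopic("queue-processor-stream", 6, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```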

Question 13

[Work Delegation and Asynchronous Processing]
Hospital XYZ wants to analyze patient data and identify those at high risk of readmission. The
hospital has patient data stored in an Electronic Health Record (EHR) system, and a separate system
for hospital admission records. Which two of the following are key advantages of using a Data Flow
over a Queue Processor for analyzing patient data in this scenario? (Choose Two)

  • A. Data Flow can directly access data from multiple systems.
  • B. Data Flow can handle larger volumes of data.
  • C. Data Flow can prioritize tasks based on urgency.
  • D. Data Flow can process data in real-time.
Answer: A, B


Question 14

[Integration]
Every day at 1 AM, all the ATM transactions at ABC Bank from the previous day are shared with the
head office of the bank. All ATM machines perform this data sharing. ABC Bank uses this information
to validate transactions and balance all the ledgers. If any discrepancy is identified, a dispute
resolution flow initiates to investigate the root cause and resolve the dispute. ABC Bank has 1 million
ATMs for which transactions need to be analyzed for discrepancies. Which one of the following is the
optimal solution for gathering the transaction information from all the ATMs?

  • A. The ATM machines generate an Excel file of all the transactions and place it in a NAS directory. Pega workflow processes the files using an Advanced agent.
  • B. The ATM machines generate an Excel file of all the transactions and place it in a NAS directory. Pega workflow processes the files using a Queue Processor.
  • C. The ATM machines generate an Excel file of all the transactions and place it in a NAS directory. Pega workflow processes the files using a Job Scheduler.
  • D. The ATM machines generate an Excel file of all the transactions and place it in a NAS directory. Pega workflow processes the files using a Data Set of type File, and feeds the files into a Data Flow.
Answer: D

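Option D is optimal because a File data set feeding a data flow streams records out of very large inputs instead of loading each file as a single unit. A rough Java sketch of the same streaming idea with plain JDK file APIs, assuming the ATM export is a delimited text file rather than a binary spreadsheet (paths and record handling are placeholders):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class AtmFileIngest {

    public static void main(String[] args) throws IOException {
        Path dropDirectory = Path.of("/nas/atm-transactions"); // placeholder NAS mount

        try (Stream<Path> files = Files.list(dropDirectory)) {
            files.filter(p -> p.toString().endsWith(".csv"))   // assumes text exports, not .xlsx
                 .forEach(AtmFileIngest::streamRecords);
        }
    }

    /** Streams one transaction record at a time, so memory use stays flat per file. */
    private static void streamRecords(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            lines.skip(1)                        // skip header row
                 .forEach(AtmFileIngest::reconcile);
        } catch (IOException e) {
            System.err.println("Failed to read " + file + ": " + e.getMessage());
        }
    }

    private static void reconcile(String transactionLine) {
        // Compare against the ledger and open a dispute case on discrepancy (omitted).
    }
}
```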

Question 15

[Pega Platform Architecture]
A healthcare company wants to migrate its patient management system to a more secure and
scalable environment. After evaluating different options, the company decides to host their
application on a Pega-managed cloud environment, expecting it to meet its growing data security
and compliance needs while ensuring uninterrupted access to patient records. In this context, which
three of the following responsibilities does Pega assume in managing the healthcare company’s
application in the Pega Cloud environment? (Choose Three)

  • A. Monitor network and ensure system-level access control to safeguard patient information and system integrity.
  • B. Install Pega Platform and any additional Pega applications to which the healthcare company subscribes.
  • C. Securely store the healthcare company’s data in a cloud repository, which helps ensure data privacy and compliance with healthcare regulations.
  • D. Oversee compliance, conduct security monitoring, and respond to security events to protect against potential breaches or threats.
  • E. Set up the necessary authentication and authorization mechanisms to control access to sensitive patient data.
Answer: A, C, D
