Salesforce AI Specialist Practice Test

Salesforce Certified AI Specialist

Last exam update: Nov 18, 2025
Page 1 out of 11. Viewing questions 1-15 out of 152

Question 1

Leadership needs to populate a dynamic form field with a summary or description created by a large
language model (LLM) to facilitate more productive conversations with customers. Leadership also
wants to keep a human in the loop to be considered in their AI strategy.
Which prompt template type should the AI Specialist recommend?

  • A. Sales Email
  • B. Field Generation
  • C. Record Summary
Answer:

B


Explanation:
The correct answer is Field Generation because this template type is designed to dynamically
populate form fields with content generated by a large language model (LLM). In this scenario,
leadership wants a dynamic form field that contains a summary or description generated by AI to aid
customer interactions. Additionally, they want to keep a human in the loop, meaning the generated
content will likely be reviewed or edited by a person before it's finalized, which aligns with the Field
Generation prompt template.
Field Generation: This prompt type allows you to generate content for specific fields in Salesforce,
leveraging large language models to create dynamic and contextual information. It ensures that AI
content is available within the record where needed, but it allows human oversight or review,
supporting the "human-in-the-loop" strategy.
Sales Email: This prompt type is mainly used for generating email content for outreach or responses,
which doesn't align directly with populating fields in a form.
Record Summary: While this option might seem close, it is typically used to summarize entire records
for high-level insights rather than filling specific fields with dynamic content based on AI generation.
Salesforce AI Specialist Reference:
You can explore more about these prompt templates and AI capabilities through Salesforce
documentation and official resources on Prompt Builder:
https://help.salesforce.com/s/articleView?id=sf.prompt_builder_templates_overview.htm


Question 2

Universal Containers is considering leveraging the Einstein Trust Layer in conjunction with Einstein
Generative AI Audit Data.
Which audit data is available using the Einstein Trust Layer?

  • A. Response accuracy and offensiveness score
  • B. Hallucination score and bias score
  • C. Masked data and toxicity score
Answer:

C


Explanation:
Universal Containers is considering the use of the Einstein Trust Layer along with Einstein Generative
AI Audit Data. The Einstein Trust Layer provides a secure and compliant way to use AI by offering
features like data masking and toxicity assessment.
The audit data available through the Einstein Trust Layer includes information about masked data—
which ensures sensitive information is not exposed—and the toxicity score, which evaluates the
generated content for inappropriate or harmful language.
Reference:
Salesforce AI Specialist Documentation - Einstein Trust Layer: Details the auditing capabilities,
including logging of masked data and evaluation of generated responses for toxicity to maintain
compliance and trust.


Question 3

Universal Containers wants to make a sales proposal and directly use data from multiple unrelated
objects (standard and custom) in a prompt template.
What should the AI Specialist recommend?

  • A. Create a Flex template to add resources with standard and custom objects as inputs.
  • B. Create a prompt template passing in a special custom object that connects the records temporarily.
  • C. Create a prompt template-triggered flow to access the data from standard and custom objects.
Answer:

A


Explanation:
Universal Containers needs to generate a sales proposal using data from multiple unrelated standard
and custom objects within a prompt template. The most effective way to achieve this is by using a
Flex template.
Flex templates in Salesforce allow AI specialists to create prompt templates that can accept inputs
from multiple sources, including various standard and custom objects. This flexibility enables the
direct use of data from unrelated objects without the need to create intermediary custom objects or
complex flows.
Reference:
Salesforce AI Specialist Documentation - Flex Templates: Explains how Flex templates can be utilized
to incorporate data from multiple sources, providing a flexible solution for complex data
requirements in prompt templates.


Question 4

What is an AI Specialist able to do when the "Enrich event logs with conversation data" setting in
Einstein Copilot is enabled?

  • A. View the user click path that led to each copilot action.
  • B. View session data including user input and copilot responses for sessions over the past 7 days.
  • C. Generate detailed reports on all Copilot conversations over any time period.
Answer:

B


Explanation:
When the "Enrich event logs with conversation data" setting is enabled in Einstein Copilot, it allows
an AI Specialist or admin to view session data, including both the user input and copilot responses
from interactions over the past 7 days. This data is crucial for monitoring how the copilot is being
used, analyzing its performance, and improving future interactions based on past inputs.
This setting enriches the event logs with detailed conversational data for better insights into the
interaction history, helping AI specialists track AI behavior and user engagement.
Option A, viewing the user click path, focuses on navigation but is not part of the conversation data
enrichment functionality.
Option C, generating detailed reports over any time period, is incorrect because this specific feature
is limited to data for the past 7 days.
Salesforce AI Specialist Reference:
You can refer to this documentation for further insights:
https://help.salesforce.com/s/articleView?id=sf.einstein_copilot_event_logging.htm


Question 5

Universal Containers’ current AI data masking rules do not align with organizational privacy and
security policies and requirements.
What should an AI Specialist recommend to resolve the issue?

  • A. Enable data masking for sandbox refreshes.
  • B. Configure data masking in the Einstein Trust Layer setup.
  • C. Add new data masking rules in LLM setup.
Answer:

B


Explanation:
When Universal Containers' AI data masking rules do not meet organizational privacy and security
standards, the AI Specialist should configure the data masking rules within the Einstein Trust Layer.
The Einstein Trust Layer provides a secure and compliant environment where sensitive data can be
masked or anonymized to adhere to privacy policies and regulations.
Option A, enabling data masking for sandbox refreshes, is related to sandbox environments, which
are separate from how AI interacts with production data.
Option C, adding masking rules in the LLM setup, is not appropriate because data masking is
managed through the Einstein Trust Layer, not the LLM configuration.
The Einstein Trust Layer allows for more granular control over what data is exposed to the AI model
and ensures compliance with privacy regulations.
Salesforce AI Specialist Reference:
For more information, refer to:
https://help.salesforce.com/s/articleView?id=sf.einstein_trust_layer_data_masking.htm


Question 6

An administrator wants to check the response of the Flex prompt
template they've built, but the preview button is greyed out.
What is the reason for this?

  • A. The records related to the prompt have not been selected.
  • B. The prompt has not been saved and activated.
  • C. A merge field has not been inserted in the prompt.
Answer:

A


Explanation:
When the preview button is greyed out in a Flex prompt template, it is often because the records
related to the prompt have not been selected. Flex prompt templates pull data dynamically from
Salesforce records, and if there are no records specified for the prompt, it can't be previewed since
there is no content to generate based on the template.
Option B, not saving or activating the prompt, would not necessarily cause the preview button to be
greyed out, but it could prevent proper functionality.
Option C, missing a merge field, would cause issues with the output but would not directly grey out
the preview button.
Ensuring that the related records are correctly linked is crucial for testing and previewing how the
prompt will function in real use cases.
Salesforce AI Specialist Reference:
Refer to the documentation on troubleshooting Flex templates here:
https://help.salesforce.com/s/articleView?id=sf.flex_prompt_builder_troubleshoot.htm


Question 7

Universal Containers’ data science team is hosting a generative large language model (LLM) on
Amazon Web Services (AWS).
What should the team use to access externally-hosted models in the Salesforce Platform?

  • A. Model Builder
  • B. App Builder
  • C. Copilot Builder
Answer:

A


Explanation:
To access externally-hosted models, such as a large language model (LLM) hosted on AWS, the Model
Builder in Salesforce is the appropriate tool. Model Builder allows teams to integrate and deploy
external AI models into the Salesforce platform, making it possible to leverage models hosted
outside of Salesforce infrastructure while still benefiting from the platform's native AI capabilities.
Option B, App Builder, is primarily used to build and configure applications in Salesforce, not to
integrate AI models.
Option C, Copilot Builder, focuses on building assistant-like tools rather than integrating external AI
models.
Model Builder enables seamless integration with external systems and models, allowing Salesforce
users to use external LLMs for generating AI-driven insights and automation.
Salesforce AI Specialist Reference:
For more details, check the Model Builder guide here:
https://help.salesforce.com/s/articleView?id=sf.model_builder_external_models.htm


Question 8

An AI Specialist built a Field Generation prompt template that worked for many records, but users
are reporting random failures with token limit errors.
What is the cause of the random nature of this error?

  • A. The number of tokens generated by the dynamic nature of the prompt template will vary by record.
  • B. The template type needs to be switched to Flex to accommodate the variable amount of tokens generated by the prompt grounding.
  • C. The number of tokens that can be processed by the LLM varies with total user demand.
Answer:

A


Explanation:
The reason behind the token limit errors lies in the dynamic nature of the prompt template used in
Field Generation. In Salesforce's AI generative models, each prompt and its corresponding output are
subject to a token limit, which encompasses both the input and output of the large language model
(LLM). Since the prompt template dynamically adjusts based on the specific data of each record, the
number of tokens varies per record. Some records may generate longer outputs based on their data
attributes, pushing the token count beyond the allowable limit for the LLM, resulting in token limit
errors.
This behavior explains why users experience random failures—it is dependent on the specific data
used in each case. For certain records, the combined input and output may fall within the token limit,
while for others, it may exceed it. This variation is intrinsic to how dynamic templates interact with
large language models.
Salesforce provides guidance in their documentation, stating that prompt template design should
take into account token limits and suggests testing with varied records to avoid such random errors.
It does not mention switching to Flex template type as a solution, nor does it suggest that token
limits fluctuate with user demand. Token limits are a constant defined by the model itself,
independent of external user load.
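The record-by-record variability can be illustrated with a short sketch. This is not Salesforce's tokenizer or its actual limits: the roughly-4-characters-per-token heuristic, the 4,096-token limit, and the field names are assumptions for illustration only.

```python
# Sketch: why a Field Generation prompt fits the token limit for some records
# but not others. The limit and the 4-chars-per-token heuristic are assumed.
def estimate_tokens(text: str) -> int:
    # Common rough heuristic for English text: ~4 characters per token.
    return max(1, len(text) // 4)

TOKEN_LIMIT = 4096  # hypothetical combined input+output limit

def fits_limit(template: str, record: dict, reserved_for_output: int = 500) -> bool:
    """Return True if the merged prompt plus expected output likely fits."""
    merged = template.format(**record)
    return estimate_tokens(merged) + reserved_for_output <= TOKEN_LIMIT

template = "Summarize this case for the customer:\n{Description}"
short_record = {"Description": "Customer reports a broken latch."}
long_record = {"Description": "shipment delayed at the port... " * 600}

print(fits_limit(template, short_record))  # True: small field, well under limit
print(fits_limit(template, long_record))   # False: long field pushes past limit
```

The same template passes or fails purely based on how much data each record merges in, which is exactly why the failures look random to users.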
Reference:
Salesforce Developer Documentation on Token Limits for Generative AI Models
Salesforce AI Best Practices on Prompt Design (Trailhead or Salesforce blog resources)


Question 9

An administrator is responsible for ensuring the security and reliability of Universal Containers'
(UC) CRM data. UC needs enhanced data protection and up-to-date AI capabilities. UC also needs to
include relevant information from a Salesforce record to be merged with the prompt.
Which feature in the Einstein Trust Layer best supports UC's need?

  • A. Data masking
  • B. Dynamic grounding with secure data retrieval
  • C. Zero-data retention policy
Answer:

B


Explanation:
Dynamic grounding with secure data retrieval is a key feature in Salesforce's Einstein Trust Layer,
which provides enhanced data protection and ensures that AI-generated outputs are both accurate
and securely sourced. This feature allows relevant Salesforce data to be merged into the AI-
generated responses, ensuring that the AI outputs are contextually aware and aligned with real-time
CRM data.
Dynamic grounding means that AI models are dynamically retrieving relevant information from
Salesforce records (such as customer records, case data, or custom object data) in a secure manner.
This ensures that any sensitive data is protected during AI processing and that the AI model’s outputs
are trustworthy and reliable for business use.
The other options are less aligned with the requirement:
Data masking refers to obscuring sensitive data for privacy purposes and is not related to merging
Salesforce records into prompts.
Zero-data retention policy ensures that AI processes do not store any user data after processing, but
this does not address the need to merge Salesforce record information into a prompt.
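At its core, "merging Salesforce record information into a prompt" is template substitution over record fields. The sketch below illustrates that idea only; the record fields and template are hypothetical, not a Salesforce API.

```python
# Hypothetical sketch of dynamic grounding: merging fields from a Salesforce
# record into a prompt template before the prompt is sent to the LLM.
record = {
    "Name": "Universal Containers",
    "Industry": "Logistics",
    "LastCaseSubject": "Delayed shipment",
}

template = (
    "Write a check-in note for {Name}, a company in the {Industry} industry. "
    "Their most recent support case was: {LastCaseSubject}."
)

# The merged (grounded) prompt now carries real CRM context.
grounded_prompt = template.format(**record)
print(grounded_prompt)
```

In the Einstein Trust Layer, this retrieval step additionally respects the running user's permissions, which is what makes the grounding "secure".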
Reference:
Salesforce Developer Documentation on Einstein Trust Layer
Salesforce Security Documentation for AI and Data Privacy


Question 10

A Salesforce Administrator is exploring the capabilities of Einstein Copilot to enhance user
interaction within their organization. They are particularly interested in how Einstein Copilot
processes user requests and the mechanism it employs to deliver responses. The administrator is
evaluating whether Einstein Copilot directly interfaces with a large language model (LLM) to fetch
and display responses to user inquiries, facilitating a broad range of requests from users.
How does Einstein Copilot handle user requests in Salesforce?

  • A. Einstein Copilot will trigger a flow that utilizes a prompt template to generate the message.
  • B. Einstein Copilot will perform an HTTP callout to an LLM provider.
  • C. Einstein Copilot analyzes the user's request and LLM technology is used to generate and display the appropriate response.
Answer:

C


Explanation:
Einstein Copilot is designed to enhance user interaction within Salesforce by leveraging Large
Language Models (LLMs) to process and respond to user inquiries. When a user submits a request,
Einstein Copilot analyzes the input using natural language processing techniques. It then utilizes LLM
technology to generate an appropriate and contextually relevant response, which is displayed
directly to the user within the Salesforce interface.
Option C accurately describes this process. Einstein Copilot does not necessarily trigger a flow
(Option A) or perform an HTTP callout to an LLM provider (Option B) for each user request. Instead, it
integrates LLM capabilities to provide immediate and intelligent responses, facilitating a broad range
of user requests.
Reference:
Salesforce AI Specialist Documentation - Einstein Copilot Overview: Details how Einstein Copilot
employs LLMs to interpret user inputs and generate responses within the Salesforce ecosystem.
Salesforce Help - How Einstein Copilot Works: Explains the underlying mechanisms of how Einstein
Copilot processes user requests using AI technologies.


Question 11

Universal Containers wants to utilize Einstein for Sales to help sales reps reach their sales quotas by
providing AI-generated plans containing guidance and steps for closing deals.
Which feature should the AI Specialist recommend to the sales team?

  • A. Find Similar Deals
  • B. Create Account Plan
  • C. Create Close Plan
Answer:

C


Explanation:
The "Create Close Plan" feature is designed to help sales reps by providing AI-generated strategies
and steps specifically focused on closing deals. This feature leverages AI to analyze the current state
of opportunities and generate a plan that outlines the actions, timelines, and key steps required to
move deals toward closure. It aligns directly with the sales team’s need to meet quotas by offering
actionable insights and structured plans.
Find Similar Deals (Option A) helps sales reps discover opportunities similar to their current deals but
doesn’t offer a plan for closing.
Create Account Plan (Option B) focuses on long-term strategies for managing accounts, which might
include customer engagement and retention, but doesn’t focus on deal closure.
Salesforce AI Specialist Reference:
For more information on using AI for sales, visit:
https://help.salesforce.com/s/articleView?id=sf.einstein_for_sales_overview.htm


Question 12

How does the Einstein Trust Layer ensure that sensitive data is protected while generating useful and
meaningful responses?

  • A. Masked data will be de-masked during response journey.
  • B. Masked data will be de-masked during request journey.
  • C. Responses that do not meet the relevance threshold will be automatically rejected.
Answer:

A


Explanation:
The Einstein Trust Layer ensures that sensitive data is protected while generating useful and
meaningful responses by masking sensitive data before it is sent to the Large Language Model (LLM)
and then de-masking it during the response journey.
How It Works:
Data Masking in the Request Journey:
Sensitive Data Identification: Before sending the prompt to the LLM, the Einstein Trust Layer scans
the input for sensitive data, such as personally identifiable information (PII), confidential business
information, or any other data deemed sensitive.
Masking Sensitive Data: Identified sensitive data is replaced with placeholders or masks. This ensures
that the LLM does not receive any raw sensitive information, thereby protecting it from potential
exposure.
Processing by the LLM:
Masked Input: The LLM processes the masked prompt and generates a response based on the
masked data.
No Exposure of Sensitive Data: Since the LLM never receives the actual sensitive data, there is no risk
of it inadvertently including that data in its output.
De-masking in the Response Journey:
Re-insertion of Sensitive Data: After the LLM generates a response, the Einstein Trust Layer replaces
the placeholders in the response with the original sensitive data.
Providing Meaningful Responses: This de-masking process ensures that the final response is both
meaningful and complete, including the necessary sensitive information where appropriate.
Maintaining Data Security: At no point is the sensitive data exposed to the LLM or any unintended
recipients, maintaining data security and compliance.
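The round trip described above can be sketched in a few lines. This is an illustration of the mask-then-de-mask pattern only; the regex patterns and the placeholder format are assumptions, not the Einstein Trust Layer's actual implementation.

```python
import re

# Illustrative patterns for sensitive data; a real system covers far more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def mask(prompt: str):
    """Replace sensitive values with placeholders; remember the mapping."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, value in enumerate(pattern.findall(prompt)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = value
            prompt = prompt.replace(value, placeholder)
    return prompt, mapping

def demask(response: str, mapping: dict) -> str:
    """Re-insert the original values into the LLM's response."""
    for placeholder, value in mapping.items():
        response = response.replace(placeholder, value)
    return response

masked, mapping = mask("Email jane@example.com about case 42.")
# The LLM only ever sees: "Email <EMAIL_0> about case 42."
llm_response = "Drafted a reply to <EMAIL_0>."
print(demask(llm_response, mapping))  # -> Drafted a reply to jane@example.com.
```

Masking happens on the request journey, de-masking on the response journey, so the raw value never reaches the model.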
Why Option A is Correct:
De-masking During Response Journey: The de-masking process occurs after the LLM has generated
its response, ensuring that sensitive data is only reintroduced into the output at the final stage,
securely and appropriately.
Balancing Security and Utility: This approach allows the system to generate useful and meaningful
responses that include necessary sensitive information without compromising data security.
Why Options B and C are Incorrect:
Option B (Masked data will be de-masked during request journey):
Incorrect Process: De-masking during the request journey would expose sensitive data before it
reaches the LLM, defeating the purpose of masking and compromising data security.
Option C (Responses that do not meet the relevance threshold will be automatically rejected):
Irrelevant to Data Protection: While the Einstein Trust Layer does enforce relevance thresholds to
filter out inappropriate or irrelevant responses, this mechanism does not directly relate to the
protection of sensitive data. It addresses response quality rather than data security.
Reference:
Salesforce AI Specialist Documentation - Einstein Trust Layer Overview: Explains how the Trust Layer
masks sensitive data in prompts and re-inserts it after LLM processing to protect data privacy.
Salesforce Help - Data Masking and De-masking Process: Details the masking of sensitive data before
sending to the LLM and the de-masking process during the response journey.
Salesforce AI Specialist Exam Guide - Security and Compliance in AI: Outlines the importance of data
protection mechanisms like the Einstein Trust Layer in AI implementations.
Conclusion:
The Einstein Trust Layer ensures sensitive data is protected by masking it before sending any prompts
to the LLM and then de-masking it during the response journey. This process allows Salesforce to
generate useful and meaningful responses that include necessary sensitive information without
exposing that data during the AI processing, thereby maintaining data security and compliance.


Question 13

Universal Containers (UC) wants to enable its sales team to get insights into product and competitor
names mentioned during calls.
How should UC meet this requirement?

  • A. Enable Einstein Conversation Insights, assign permission sets, define recording managers, and customize insights with up to 50 competitor names.
  • B. Enable Einstein Conversation Insights, connect a recording provider, assign permission sets, and customize insights with up to 25 products.
  • C. Enable Einstein Conversation Insights, enable sales recording, assign permission sets, and customize insights with up to 50 products.
Answer:

C


Explanation:
To provide the sales team with insights into product and competitor names mentioned during calls,
Universal Containers should:
Enable Einstein Conversation Insights: Activates the feature that analyzes call recordings for valuable
insights.
Enable Sales Recording: Allows calls to be recorded within Salesforce without needing an external
recording provider.
Assign Permission Sets: Grants the necessary permissions to sales team members to access and
utilize conversation insights.
Customize Insights: Configure the system to track mentions of up to 50 products and 50 competitors,
providing tailored insights relevant to the organization's needs.
Option C accurately reflects these steps. Option A mentions defining recording managers but omits
enabling sales recording within Salesforce. Option B suggests connecting a recording provider and
limits customization to 25 products, which does not fully meet UC's requirements.
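Conceptually, the customization step boils down to scanning call transcripts for a configured list of terms. The sketch below illustrates that idea only; the product and competitor names are made up, and this is not the Einstein Conversation Insights implementation.

```python
# Illustrative keyword tracking over a call transcript. The real feature
# supports up to 50 products and 50 competitors; two of each shown here.
TRACKED_PRODUCTS = ["Heavy-Duty Crate", "EcoBox"]
TRACKED_COMPETITORS = ["Acme Logistics", "BoxWorks"]

def find_mentions(transcript: str, terms: list[str]) -> dict[str, int]:
    """Count case-insensitive mentions of each tracked term."""
    lowered = transcript.lower()
    return {t: lowered.count(t.lower()) for t in terms if t.lower() in lowered}

transcript = "They asked how EcoBox compares to what Acme Logistics offers."
print(find_mentions(transcript, TRACKED_PRODUCTS))     # {'EcoBox': 1}
print(find_mentions(transcript, TRACKED_COMPETITORS))  # {'Acme Logistics': 1}
```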
Reference:
Salesforce AI Specialist Documentation - Setting Up Einstein Conversation Insights: Provides
instructions on enabling conversation insights and sales recording.
Salesforce Help - Customizing Conversation Insights: Details how to customize insights with up to 50
products and competitors.
Salesforce AI Specialist Exam Guide: Outlines best practices for implementing AI features like Einstein
Conversation Insights in a sales context.

Question 14

What is the role of the large language model (LLM) in executing an Einstein Copilot Action?

  • A. Find similar requests and provide actions that need to be executed
  • B. Identify the best matching actions and correct order of execution
  • C. Determine a user's access and sort actions by priority to be executed
Answer:

B


Explanation:
In Einstein Copilot, the role of the Large Language Model (LLM) is to analyze user inputs and identify
the best matching actions that need to be executed. It uses natural language understanding to break
down the user’s request and determine the correct sequence of actions that should be performed.
By doing so, the LLM ensures that the tasks and actions executed are contextually relevant and are
performed in the proper order. This process provides a seamless, AI-enhanced experience for users
by matching their requests to predefined Salesforce actions or flows.
The other options are incorrect because:
A mentions finding similar requests, which is not the primary role of the LLM in this context.
C focuses on access and sorting by priority, which is handled more by security models and
governance than by the LLM.
Reference:
Salesforce Einstein Documentation on Einstein Copilot Actions
Salesforce AI Documentation on Large Language Models


Question 15

A service agent is looking at a custom object that stores travel information. They recently received a
weather alert and now need to cancel flights for the customers related to this itinerary. The service
agent needs to review the Knowledge articles about canceling and rebooking the customer flights.
Which Einstein Copilot capability helps the agent accomplish this?

  • A. Execute tasks based on available actions, answering questions using information from accessible Knowledge articles.
  • B. Invoke a flow which makes a call to external data to create a Knowledge article.
  • C. Generate a Knowledge article based off the prompts that the agent enters to create steps to cancel flights.
Answer:

A


Explanation:
In this scenario, the Einstein Copilot capability that best helps the agent is its ability to execute tasks
based on available actions and answer questions using data from Knowledge articles. Einstein Copilot
can assist the service agent by providing relevant Knowledge articles on canceling and rebooking
flights, ensuring that the agent has access to the correct steps and procedures directly within the
workflow.
This feature leverages the agent’s existing context (the travel itinerary) and provides actionable
insights or next steps from the relevant Knowledge articles to help the agent quickly resolve the
customer’s needs.
The other options are incorrect:
B refers to invoking a flow to create a Knowledge article, which is unrelated to the task of retrieving
existing Knowledge articles.
C focuses on generating Knowledge articles, which is not the immediate need for this situation
where the agent requires guidance on existing procedures.
Reference:
Salesforce Documentation on Einstein Copilot
Trailhead Module on Einstein for Service
