ISACA Advanced in AI Security Management Exam

Last exam update: Nov 18, 2025
Page 1 out of 6. Viewing questions 1-15 out of 90

Question 1

A financial institution plans to deploy an AI system to provide credit risk assessments for loan
applications. Which of the following should be given the HIGHEST priority in the system’s design to
ensure ethical decision-making and prevent bias?

  • A. Regularly update the model with new customer data to improve prediction accuracy.
  • B. Integrate a mechanism for customers to appeal decisions directly within the system.
  • C. Train the system to provide advisory outputs with final decisions made by human experts.
  • D. Restrict the model’s decision-making criteria to objective financial metrics only.
Answer:

C


Explanation:
In AI governance frameworks, credit scoring is treated as a high-risk application. For such systems,
the highest-priority safeguard is human oversight to ensure fairness, accountability, and prevention
of bias in automated decisions.
The AI Security Management™ (AAISM) domain of AI Governance and Program Management
emphasizes that high-impact AI systems require explicit governance structures and human
accountability. Human-in-the-loop design ensures that final decisions remain the responsibility of
human experts rather than being fully automated. This is particularly critical in financial contexts,
where biased outputs can affect individuals’ access to credit and create compliance risks.
Official ISACA AI governance guidance specifies:
High-risk AI systems must comply with strict requirements, including human oversight, transparency,
and fairness.
The purpose of human oversight is to reduce risks to fundamental rights by ensuring humans can
intervene or override an automated decision.
Bias controls are strengthened by requiring human review processes that can analyze outputs and
prevent unfair discrimination.
Why the other options are not the highest priority:
A. Regular updates improve accuracy but do not guarantee fairness or ethical decision-making. Model drift can introduce new bias if not governed properly.
B. Appeals mechanisms are important for accountability, but they operate after harm has occurred. Governance frameworks emphasize prevention through human oversight in the decision loop.
D. Restricting criteria to “objective metrics” is insufficient, as even objective data can contain hidden proxies for protected attributes. Bias mitigation requires monitoring, testing, and human oversight, not only feature restriction.
AAISM Domain Alignment:
Domain 1 – AI Governance and Program Management: Ensures accountability, ethical oversight, and
governance structures.
Domain 2 – AI Risk Management: Identifies and mitigates risks such as bias, discrimination, and lack
of transparency.
Domain 3 – AI Technologies and Controls: Provides the technical enablers for implementing oversight
mechanisms and bias detection tools.
Reference from AAISM and ISACA materials:
AAISM Exam Content Outline – Domain 1: AI Governance and Program Management (roles,
responsibilities, oversight).
ISACA AI Governance Guidance (human oversight as mandatory in high-risk AI applications).
Bias and Fairness Controls in AI (human review and intervention as a primary safeguard).


Question 2

A retail organization implements an AI-driven recommendation system that utilizes customer
purchase history. Which of the following is the BEST way for the organization to ensure privacy and
comply with regulatory standards?

  • A. Conducting quarterly retraining of the AI model to maintain the accuracy of recommendations
  • B. Maintaining a register of legal and regulatory requirements for privacy
  • C. Establishing a governance committee to oversee AI privacy practices
  • D. Storing customer data indefinitely to ensure the AI model has a complete history
Answer:

B


Explanation:
According to the AI Security Management™ (AAISM) study framework, compliance with privacy and
regulatory standards must begin with a formalized process of identifying, documenting, and
maintaining applicable obligations. The guidance explicitly notes that organizations should maintain
a comprehensive register of legal and regulatory requirements to ensure accountability and
alignment with privacy laws. This register serves as the foundation for all governance, risk, and
control practices surrounding AI systems that handle personal data.
Maintaining such a register ensures that the recommendation system operates under the principles
of privacy by design and privacy by default. It allows decision-makers and auditors to trace every AI
data processing activity back to relevant compliance obligations, thereby demonstrating adherence
to laws such as GDPR, CCPA, or other jurisdictional mandates.
Other measures listed in the options contribute to good practice but do not achieve the same direct
compliance outcome. Retraining models improves technical accuracy but does not address legal
obligations. Oversight committees are valuable but require the documented register as a baseline to
oversee effectively. Indefinite storage of customer data contradicts regulatory requirements,
particularly the principle of data minimization and storage limitation.
AAISM Domain Alignment:
This requirement falls under Domain 1 – AI Governance and Program Management, which
emphasizes organizational accountability, policy creation, and maintaining compliance
documentation as part of a structured governance program.
Reference from AAISM and ISACA materials:
AAISM Exam Content Outline – Domain 1: AI Governance and Program Management
AI Security Management Study Guide – Privacy and Regulatory Compliance Controls
ISACA AI Governance Guidance – Maintaining Registers of Applicable Legal Requirements


Question 3

An organization is updating its vendor arrangements to facilitate the safe adoption of AI
technologies. Which of the following would be the PRIMARY challenge in delivering this initiative?

  • A. Failure to adequately assess AI risk
  • B. Inability to sufficiently identify shadow AI within the organization
  • C. Unwillingness of large AI companies to accept updated terms
  • D. Insufficient legal team experience with AI
Answer:

C


Explanation:
In the AAISM™ guidance, vendor management for AI adoption highlights that large AI providers
often resist contractual changes, particularly when customers seek to impose stricter security,
transparency, or ethical obligations. The official study materials emphasize that while organizations
must evaluate AI risk and build internal expertise, the primary challenge lies in negotiating
acceptable contractual terms with dominant AI vendors who may not be willing to adjust their
standardized agreements. This resistance limits the ability of organizations to enforce oversight, bias
controls, and compliance requirements contractually.
Reference:
AAISM Exam Content Outline – AI Risk Management
AI Security Management Study Guide – Third-Party and Vendor Risk


Question 4

After implementing a third-party generative AI tool, an organization learns about new regulations
related to how organizations use AI. Which of the following would be the BEST justification for the
organization to decide not to comply?

  • A. The AI tool is widely used within the industry
  • B. The AI tool is regularly audited
  • C. The risk is within the organization’s risk appetite
  • D. The cost of noncompliance was not determined
Answer:

C


Explanation:
The AAISM framework clarifies that compliance decisions must always be tied to an organization’s
risk appetite and tolerance. When new regulations emerge, management may choose not to comply
if the associated risk remains within the documented and approved risk appetite, provided that
accountability is established and governance structures support this decision. Widespread industry use, third-party audits, or the absence of a cost assessment do not justify noncompliance; under AI governance principles, a documented risk appetite is the only recognized basis for such a decision.
Reference:
AAISM Study Guide – AI Governance and Program Management
ISACA AI Risk Guidance – Risk Appetite and Compliance Decisions


Question 5

Which of the following is the MOST serious consequence of an AI system correctly guessing the
personal information of individuals and drawing conclusions based on that information?

  • A. The exposure of personal information may result in litigation
  • B. The publicly available output of the model may include false or defamatory statements about individuals
  • C. The output may reveal information about individuals or groups without their knowledge
  • D. The exposure of personal information may lead to a decline in public trust
Answer:

C


Explanation:
The AAISM curriculum states that the most serious privacy concern occurs when AI systems infer and
disclose sensitive personal or group information without the knowledge or consent of the
individuals. This constitutes a direct breach of privacy rights and data protection principles, including
those enshrined in GDPR and other global regulations. While litigation, reputational damage, or loss
of trust are significant consequences, the unauthorized revelation of personal information through
inference is classified as the most severe, because it directly undermines individual autonomy and
confidentiality.
Reference:
AAISM Exam Content Outline – AI Risk Management
AI Security Management Study Guide – Privacy and Confidentiality Risks


Question 6

Which of the following should be done FIRST when developing an acceptable use policy for
generative AI?

  • A. Determine the scope and intended use of AI
  • B. Review AI regulatory requirements
  • C. Consult with risk management and legal
  • D. Review existing company policies
Answer:

A


Explanation:
According to the AAISM framework, the first step in drafting an acceptable use policy is defining the
scope and intended use of the AI system. This ensures that governance, regulatory considerations,
risk assessments, and alignment with organizational policies are all tailored to the specific
applications and functions the AI will serve. Once scope and intended use are clearly defined, legal,
regulatory, and risk considerations can be systematically applied. Without this step, policies risk
being generic and misaligned with business objectives.
Reference:
AAISM Study Guide – AI Governance and Program Management (Policy Development Lifecycle)
ISACA AI Governance Guidance – Defining Scope and Use Priorities


Question 7

A model producing contradictory outputs based on highly similar inputs MOST likely indicates the
presence of:

  • A. Poisoning attacks
  • B. Evasion attacks
  • C. Membership inference
  • D. Model exfiltration
Answer:

B


Explanation:
The AAISM study framework describes evasion attacks as attempts to manipulate or probe a trained
model during inference by using crafted inputs that appear normal but cause the system to generate
inconsistent or erroneous outputs. Contradictory results from nearly identical queries are a typical
symptom of evasion, as the attacker is probing decision boundaries to find weaknesses. Poisoning
attacks occur during training, not inference, while membership inference relates to exposing
whether data was part of the training set, and model exfiltration involves extracting proprietary
parameters or architecture. The clearest indication of contradictory outputs from similar queries
therefore aligns directly with the definition of evasion attacks in AAISM materials.
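The symptom described above, contradictory labels on near-identical inputs, can be monitored for directly. The sketch below is a toy illustration (the classifier and probe pairs are hypothetical placeholders, not AAISM material): it flags probe pairs that straddle a decision boundary, which is exactly what an evasion attacker searches for.

```python
def flag_inconsistency(predict, probes):
    """Return probe pairs where near-identical inputs produce different
    labels -- the contradictory-output symptom of evasion probing.
    `predict` and `probes` are toy placeholders for illustration only."""
    return [(a, b) for a, b in probes if predict(a) != predict(b)]

# Toy classifier with a decision boundary at 0.5.
predict = lambda x: 1 if x > 0.5 else 0

# Each pair differs by 0.01 -- "highly similar inputs."
probes = [(0.49, 0.51), (0.20, 0.21), (0.80, 0.81)]

print(flag_inconsistency(predict, probes))  # [(0.49, 0.51)]
```

Only the pair straddling the boundary is flagged; in practice such checks run against inference logs to detect boundary-probing behavior.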
Reference:
AAISM Study Guide – AI Technologies and Controls (Adversarial Machine Learning and Attack Types)
ISACA AI Security Management – Inference-time Attack Scenarios


Question 8

Which of the following recommendations would BEST help a service provider mitigate the risk of
lawsuits arising from generative AI’s access to and use of internet data?

  • A. Activate filtering logic to exclude intellectual property flags
  • B. Disclose service provider policies to declare compliance with regulations
  • C. Appoint a data steward specialized in AI to strengthen security governance
  • D. Review log information that records how data was collected
Answer:

A


Explanation:
The AAISM materials highlight that one of the primary legal risks with generative AI systems is the
unauthorized use of copyrighted or intellectual property–protected data drawn from internet
sources. To mitigate lawsuits, the most effective recommendation is to implement filtering logic that
actively excludes data flagged for intellectual property risks before ingestion or generation. While
disclosing compliance policies, appointing governance roles, or reviewing logs are supportive
measures, they do not directly prevent the core liability of using restricted content. The study guide
explicitly emphasizes that proactive filtering and data governance controls are the most effective
safeguards against legal disputes concerning content origin.
Reference:
AAISM Exam Content Outline – AI Risk Management (Legal and Intellectual Property Risks)
AI Security Management Study Guide – Generative AI Data Governance

User Votes:
A
50%
B
50%
C
50%
D
50%
Discussions
vote your answer:
A
B
C
D
0 / 1000

Question 9

Which of the following is the BEST approach for minimizing risk when integrating acceptable use
policies for AI foundation models into business operations?

  • A. Limit model usage to predefined scenarios specified by the developer
  • B. Rely on the developer's enforcement mechanisms
  • C. Establish AI model life cycle policy and procedures
  • D. Implement responsible development training and awareness
Answer:

C


Explanation:
The AAISM guidance defines risk minimization for AI deployment as requiring a formalized AI model
life cycle policy and associated procedures. This ensures oversight from design to deployment,
covering data handling, bias testing, monitoring, retraining, decommissioning, and acceptable use.
Limiting usage to developer-defined scenarios or relying on vendor mechanisms transfers
responsibility away from the organization and fails to meet governance expectations. Training and
awareness support cultural alignment but cannot substitute for structured lifecycle controls.
Therefore, the establishment of a documented lifecycle policy and procedures is the most
comprehensive way to minimize operational, compliance, and ethical risks in integrating foundation
models.
Reference:
AAISM Study Guide – AI Governance and Program Management (Model Lifecycle Governance)
ISACA AI Security Guidance – Policies and Lifecycle Management


Question 10

Which of the following metrics BEST evaluates the ability of a model to correctly identify all true
positive instances?

  • A. F1 score
  • B. Recall
  • C. Precision
  • D. Specificity
Answer:

B


Explanation:
AAISM technical coverage identifies recall as the metric that specifically measures a model’s ability
to capture all true positive cases out of the total actual positives. A high recall means the system
minimizes false negatives, ensuring that relevant instances are not overlooked. Precision instead
measures correctness among predicted positives, specificity focuses on true negatives, and the F1
score balances precision and recall but does not by itself indicate the completeness of capturing
positives. The official study guide defines recall as the most direct metric for evaluating how well a
model identifies all relevant positive cases, making it the correct answer.
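The distinction among these metrics can be shown with a small worked example (the labels below are illustrative values, not from the exam material):

```python
# Recall = TP / (TP + FN): the fraction of actual positives the model captures.
# Precision = TP / (TP + FP): the fraction of predicted positives that are correct.
y_true = [1, 1, 1, 1, 0, 0, 0, 1]  # 5 actual positives
y_pred = [1, 1, 0, 1, 0, 1, 0, 1]  # one missed positive, one false alarm

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # 4
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # 1
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # 1

recall = tp / (tp + fn)       # completeness over actual positives
precision = tp / (tp + fp)    # correctness over predicted positives
f1 = 2 * precision * recall / (precision + recall)

print(f"recall={recall:.2f} precision={precision:.2f} f1={f1:.2f}")
# recall=0.80 precision=0.80 f1=0.80
```

A missed positive (false negative) lowers recall but leaves precision untouched, which is why recall is the metric that measures "identifying all true positive instances."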
Reference:
AAISM Study Guide – AI Technologies and Controls (Evaluation Metrics and Model Performance)
ISACA AI Security Management – Model Accuracy and Completeness Assessments


Question 11

An organization uses an AI tool to scan social media for product reviews. Fraudulent social media
accounts begin posting negative reviews attacking the organization's product. Which type of AI attack
is MOST likely to have occurred?

  • A. Model inversion
  • B. Deepfake
  • C. Availability attack
  • D. Data poisoning
Answer:

C


Explanation:
The AAISM materials classify availability attacks as attempts to disrupt or degrade the functioning of
an AI system so that its outputs become unreliable or unusable. In this scenario, the fraudulent social
media accounts are deliberately overwhelming the AI tool with misleading negative reviews,
undermining its ability to deliver accurate sentiment analysis. This aligns directly with the concept of
an availability attack. Model inversion relates to reconstructing training data from outputs, deepfakes
involve synthetic content generation, and data poisoning corrupts the training set rather than
manipulating inputs at runtime. Therefore, the fraudulent review campaign is most accurately
identified as an availability attack.
Reference:
AAISM Study Guide – AI Risk Management (Adversarial Threats and Availability Risks)
ISACA AI Security Management – Attack Classifications


Question 12

An attacker crafts inputs to a large language model (LLM) to exploit output integrity controls. Which
of the following types of attacks is this an example of?

  • A. Prompt injection
  • B. Jailbreaking
  • C. Remote code execution
  • D. Evasion
Answer:

A


Explanation:
According to the AAISM framework, prompt injection is the act of deliberately crafting malicious or
manipulative inputs to override, bypass, or exploit the model’s intended controls. In this case, the
attacker is targeting the integrity of the model’s outputs by exploiting weaknesses in how it
interprets and processes prompts. Jailbreaking is a subtype of prompt injection specifically designed
to override safety restrictions, while evasion attacks target classification boundaries in other ML
contexts, and remote code execution refers to system-level exploitation outside of the AI inference
context. The most accurate classification of this attack is prompt injection.
Reference:
AAISM Exam Content Outline – AI Technologies and Controls (Prompt Security and Input
Manipulation)
AI Security Management Study Guide – Threats to Output Integrity


Question 13

An organization using an AI model for financial forecasting identifies inaccuracies caused by missing
data. Which of the following is the MOST effective data cleaning technique to improve model
performance?

  • A. Increasing the frequency of model retraining with the existing data set
  • B. Applying statistical methods to address missing data and reduce bias
  • C. Deleting outlier data points to prevent unusual values impacting the model
  • D. Tuning model hyperparameters to increase performance and accuracy
Answer:

B


Explanation:
The AAISM study content emphasizes that data quality management is a central pillar of AI risk
reduction. Missing data introduces bias and undermines predictive accuracy if not addressed
systematically. The most effective remediation is to apply statistical imputation and related methods
to fill in or adjust for missing values in a way that minimizes bias and preserves data integrity.
Retraining on flawed data does not solve the underlying issue. Deleting outliers may harm model
robustness, and hyperparameter tuning optimizes model mechanics but cannot resolve missing
information. Therefore, the proper corrective technique for missing data is the application of
statistical methods to reduce bias.
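A minimal sketch of one such statistical method, mean imputation, is shown below (the revenue series is invented for illustration; real pipelines would weigh median, regression, or multiple imputation depending on the distribution):

```python
import statistics

def impute_mean(series):
    """Replace missing values (None) with the mean of the observed values.
    A simple statistical-imputation sketch; production work would compare
    mean, median, regression, or multiple imputation for the data at hand."""
    observed = [x for x in series if x is not None]
    fill = statistics.fmean(observed)
    return [fill if x is None else x for x in series]

# Hypothetical monthly revenue figures with two gaps.
revenue = [120.0, None, 135.0, 128.0, None, 140.0]
print(impute_mean(revenue))
# [120.0, 130.75, 135.0, 128.0, 130.75, 140.0]
```

Unlike deleting the incomplete rows, imputation preserves the sample size and, when the imputation model is chosen carefully, avoids skewing the distribution the forecasting model learns from.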
Reference:
AAISM Study Guide – AI Risk Management (Data Integrity and Quality Controls)
ISACA AI Governance Guidance – Data Preparation and Bias Mitigation


Question 14

Which of the following is MOST important to consider when validating a third-party AI tool?

  • A. Terms and conditions
  • B. Right to audit
  • C. Industry analysis and certifications
  • D. Roundtable testing
Answer:

B


Explanation:
The AAISM framework specifies that when adopting third-party AI tools, the right to audit is the most
critical contractual and governance safeguard. This ensures that the organization can independently
verify compliance with security, privacy, and ethical requirements throughout the lifecycle of the
tool. Terms and conditions provide general usage guidance but often limit liability rather than
ensuring transparency. Industry certifications may indicate good practice but do not substitute for
direct verification. Roundtable testing is useful for evaluation but lacks enforceability. Only the
contractual right to audit provides formal assurance that the tool operates in accordance with
organizational policies and external regulations.
Reference:
AAISM Exam Content Outline – AI Governance and Program Management (Third-Party Governance)
AI Security Management Study Guide – Vendor Oversight and Audit Rights


Question 15

Which of the following is the BEST mitigation control for membership inference attacks on AI
systems?

  • A. Model ensemble techniques
  • B. AI threat modeling
  • C. Differential privacy
  • D. Cybersecurity-oriented red teaming
Answer:

C


Explanation:
Membership inference attacks attempt to determine whether a particular data point was part of a
model’s training set, which risks violating privacy. The AAISM study guide highlights differential
privacy as the most effective mitigation because it introduces mathematical noise that obscures
individual contributions without significantly degrading model performance. Ensemble methods
improve robustness but do not specifically protect privacy. Threat modeling and red teaming help
identify risks but are not direct controls. The explicit mitigation control aligned with privacy
preservation for membership inference is differential privacy.
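The "mathematical noise" mentioned above can be sketched with the Laplace mechanism on a counting query (a stdlib-only illustration under assumed parameters; real deployments would use a vetted differential-privacy library rather than hand-rolled noise):

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Differentially private counting query via the Laplace mechanism.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon.
    Illustrative sketch only -- use an audited DP library in production."""
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) by inverse CDF from a uniform draw.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(7)
# Smaller epsilon -> more noise and stronger privacy; larger -> more accuracy.
noisy = dp_count(range(100), lambda v: v < 50, epsilon=1.0)
print(round(noisy, 2))
```

Because each individual's presence changes the true count by at most 1 and the noise scale is calibrated to that sensitivity, an attacker observing the noisy answer cannot reliably infer whether any single record was in the data, which is precisely the membership-inference threat being mitigated.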
Reference:
AAISM Study Guide – AI Technologies and Controls (Privacy-Preserving Techniques)
ISACA AI Security Management – Membership Inference Mitigations
