New Resources from ISACA Provide Audit and Assurance Guidance for the NIST Cybersecurity Framework 2.0 and Artificial Intelligence

The ISACA Cybersecurity Audit Program: Based on NIST Cybersecurity Framework 2.0 updates ISACA's 2016 IS Audit/Assurance Cybersecurity Program with new content reflecting the changes in NIST CSF 2.0. It covers the six functions of NIST CSF 2.0 (govern, identify, protect, detect, respond and recover) and delves into categories including cybersecurity supply chain risk management, platform security, adverse event analysis and incident recovery plan execution. The audit program enables auditors to verify compliance with NIST CSF 2.0; assess the effectiveness of security controls, policies, procedures and programs; communicate control status and cybersecurity preparedness to management and other key stakeholders; and identify areas of current or emerging risk for the organization.
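
To make the structure concrete, the following is a minimal sketch, in Python, of how an auditor might model the CSF 2.0 hierarchy of functions, categories and subcategories alongside basic audit fields. It is illustrative only and not taken from the ISACA workbook; the status values and field names are assumptions.

    # Illustrative model of the NIST CSF 2.0 hierarchy with audit fields.
    # Not the ISACA program's actual schema; names are assumptions.
    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        NOT_ASSESSED = "not assessed"
        COMPLIANT = "compliant"
        PARTIAL = "partially compliant"
        NONCOMPLIANT = "noncompliant"

    @dataclass
    class Subcategory:
        identifier: str          # e.g. "GV.SC-01" in CSF 2.0 notation
        description: str
        status: Status = Status.NOT_ASSESSED
        observations: list[str] = field(default_factory=list)

    @dataclass
    class Category:
        name: str                # e.g. "Cybersecurity Supply Chain Risk Management"
        subcategories: list[Subcategory] = field(default_factory=list)

    # The six CSF 2.0 functions named in the audit program.
    FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]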

The NIST Cybersecurity Framework 2.0 audit program features improved functionality in the standard Excel spreadsheet, with additional columns to track the auditor's opinion and testing observations, as well as a worksheet with summary charts. It is also available in a new Word document format. Recommended request list items have been added to each subcategory, and a newly created appendix summarizes the full request list. Additionally, the audit program now includes an evaluation worksheet that auditors can use to document the implementation status of each NIST CSF 2.0 subcategory.
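
The kind of roll-up the summary-charts worksheet performs can be sketched as a simple tally of subcategory statuses per function. The data layout and status labels below are assumptions for illustration, not the workbook's actual format; the subcategory identifiers shown are real CSF 2.0 identifiers.

    # Hedged sketch of a per-function status roll-up, as a summary chart
    # might aggregate it. Row layout and status labels are assumed.
    from collections import Counter

    # (function, subcategory, status) rows, as an auditor might record them.
    rows = [
        ("Govern",  "GV.SC-01", "implemented"),
        ("Govern",  "GV.SC-02", "partially implemented"),
        ("Protect", "PR.PS-01", "implemented"),
        ("Detect",  "DE.AE-02", "not implemented"),
    ]

    summary: dict[str, Counter] = {}
    for function, _subcategory, status in rows:
        summary.setdefault(function, Counter())[status] += 1

    for function, counts in summary.items():
        print(function, dict(counts))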

While there is currently no single standardized framework or methodology for auditing AI, auditors seeking a deeper understanding of AI controls can leverage the Artificial Intelligence Audit Toolkit, a library of AI controls derived from select control frameworks and laws that shows how those controls relate to different aspects of the AI lifecycle.

The assessment guide portion of the Artificial Intelligence Audit Toolkit provides a methodology to evaluate the control design and operating effectiveness of AI-enabled systems, tools and processes. It covers controls in a series of control families and categories spanning areas including AI bias mitigation and fairness, AI data privacy and rights, human-AI interaction and experience, and secure systems design and development. Additionally, it walks through the six dimensions of AI explainability (rationale, responsibility, data, fairness, safety and performance, and impact), as well as the key elements of the assessment development approach: control synthesis and mapping, and explainability integration. The Excel-based toolkit provides a comprehensive resource to support AI assessment efforts, with spreadsheets that offer guidance on the AI control assessment for each explainability dimension.
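
As a rough illustration of that mapping, the sketch below groups AI controls by explainability dimension and records design versus operating effectiveness. The control text and schema are hypothetical, invented here to show the shape of the exercise, and do not reproduce the toolkit's spreadsheets.

    # Illustrative mapping of AI controls to the six explainability
    # dimensions. The control and its fields are hypothetical examples.
    DIMENSIONS = [
        "rationale", "responsibility", "data",
        "fairness", "safety and performance", "impact",
    ]

    controls = [
        {
            "family": "AI Bias Mitigation and Fairness",
            "control": "Bias testing is performed before each model release",
            "dimensions": ["fairness", "safety and performance"],
            "design_effective": True,
            # e.g. the test suite exists but is not run for every release
            "operating_effective": False,
        },
    ]

    # Group controls by explainability dimension for assessment worksheets.
    by_dimension: dict[str, list[str]] = {d: [] for d in DIMENSIONS}
    for c in controls:
        for d in c["dimensions"]:
            by_dimension[d].append(c["control"])

    for dimension, mapped in by_dimension.items():
        print(f"{dimension}: {len(mapped)} control(s) mapped")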

“The digital trust professionals in ISACA’s global community are working in fields that are constantly evolving, and ISACA is committed to walking alongside them with the tools, resources and best practices they need to do their jobs effectively,” says Lisa Cook, ISACA GRC Professional Practices Principal. “During periods of uncertainty with technology or regulations that are in their nascent stage—such as with AI—it is especially important to ensure the professional community is equipped and supported.”